Transcript for OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger | Lex Fridman Podcast #491

This is a transcript of Lex Fridman Podcast #491 with Peter Steinberger. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human-generated and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation. Click a link to jump approximately to that part of the transcript:

Episode highlight
Introduction
OpenClaw origin story
Mind-blowing moment
Why OpenClaw went viral
Self-modifying AI agent
Name-change drama
Moltbook saga
OpenClaw security concerns
How to code with AI agents

Episode highlight

Peter Steinberger (00:00:00) I watched my agent happily click the “I’m not a robot” button. I made the agent very aware. Like, it knows what its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It understands its own system. That made it very easy for an agent to… Oh, you don’t like something? You just prompt it into existence, and then the agent will just modify its own software. People talk about self-modifying software, I just built it. I actually think vibe coding is a slur.
Lex Fridman (00:00:31) You prefer agentic engineering?
Peter Steinberger (00:00:33) Yeah, I always tell people I’d- I do agentic engineering, and then maybe after 3:00 AM, I switch to vibe coding, and then I have regrets the next day.
Lex Fridman (00:00:40) What a walk of shame.
Peter Steinberger (00:00:42) Yeah, you just have to clean up and, like, fix your sh- shit.
Lex Fridman (00:00:45) We’ve all been there.
Peter Steinberger (00:00:46) I used to write really long prompts. And by writing, I mean, I don’t write, I- I- I talk, you know? These- these hands are, like, too- too precious for writing now. I just- I just use spoken prompts to build my software.
Lex Fridman (00:01:00) So, you, for real, with all those terminals, are using voice?
Peter Steinberger (00:01:04) Yeah. I used to do it very extensively, to the point where there was a period where I lost my voice.
Lex Fridman (00:01:13) I mean, I have to ask you, just curious. I- I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger (00:01:27) Yeah.

Introduction

Lex Fridman (00:01:30) The following is a conversation with Peter Steinberger, creator of OpenClaw, formerly known as Moltbot, Clawdbot, Clawdis, Clawd, spelled with a W as in lobster claw. Not to be confused with Claude, the AI model from Anthropic, spelled with a U. In fact, this confusion is the reason Anthropic kindly asked Peter to change the name to OpenClaw. So, what is OpenClaw? It’s an open-source AI agent that has taken over the tech world in a matter of days, exploding in popularity, reaching over 180,000 stars on GitHub, and spawning the social network Moltbook, where AI agents post manifestos and debate consciousness, creating a mix of excitement and fear in the general public.
Lex Fridman (00:02:19) And a kind of AI psychosis, a mix of clickbait fearmongering and genuine, fully justifiable concern about the role of AI in our digital, interconnected human world. OpenClaw, as its tagline states, is the AI that actually does things. It’s an autonomous AI assistant that lives in your computer, has access to all of your stuff, if you let it, and talks to you through Telegram, WhatsApp, Signal, iMessage, and whatever other messaging client. It uses whatever AI model you like, including Claude Opus 4.6 and GPT-5.3 Codex, all to do stuff for you. Many people are calling this one of the biggest moments in the recent history of AI, since the launch of ChatGPT in November 2022.
Lex Fridman (00:03:07) The ingredients for this kind of AI agent were all there, but putting it all together in a system that definitively takes a step forward over the line from language to agency, from ideas to actions, in a way that created a useful assistant that feels like one who gets you and learns from you, in an open source, community-driven way, is the reason OpenClaw took the internet by storm. Its power, in large part, comes from the fact that you can give it access to all of your stuff and give it permission to do anything with that stuff in order to be useful to you. This is very powerful, but it is also dangerous. OpenClaw represents freedom, but with freedom comes responsibility.
Lex Fridman (00:03:51) With it, you can own and have control over your data, but precisely because you have this control, you also have the responsibility to protect it from cybersecurity threats of various kinds. There are great ways to protect yourself, but the threats and vulnerabilities are out there. Again, a powerful AI agent with system-level access is a security minefield, but it also represents the future, because when done well and securely, it can be extremely useful to each of us humans as a personal assistant. We discuss all of this with Peter, and also discuss his big-picture programming and entrepreneurship life story, which I think is truly inspiring. He spent 13 years building PSPDFKit, which is software used on a billion devices.
Lex Fridman (00:04:41) He sold it, and for a brief time fell out of love with programming, vanished for three years, and then came back, rediscovered his love for programming, and built, in a very short time, an open-source AI agent that took the internet by storm. He is, in many ways, the symbol of the AI revolution happening in the programming world. There was the ChatGPT moment in 2022, the DeepSeek moment in 2025, and now, in ’26, we’re living through the OpenClaw moment, the age of the lobster, the start of the agentic AI revolution. What a time to be alive. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Peter Steinberger.

OpenClaw origin story

Lex Fridman (00:05:36) The one and only, the Clawdfather. Actually, Benjamin predicted it in his tweet: “The following is a conversation with Clawd, a respected crustacean.” It’s a hilarious-looking picture of a lobster in a suit, so I think the prophecy has been fulfilled. Let’s go to this moment when you built a prototype in one hour that was the early version of OpenClaw. I think this story’s really inspiring to a lot of people, because this prototype led to something that just took the internet by storm… and became the fastest-growing repository in GitHub history, with now over 175,000 stars. So, what was the story of the one-hour prototype?
Peter Steinberger (00:06:20) You know, I wanted that since April.
Lex Fridman (00:06:23) A personal assistant. AI personal assistant.
Peter Steinberger (00:06:25) Yeah. And I, I played around with some other things, like even stuff that gets all my WhatsApp messages, and I could just run queries on it. That was back when we had GPT-4.1, with the one-million-token context window. And I, I pulled in all the data and then just asked it questions like, “What makes this friendship meaningful?”
Lex Fridman (00:06:50) Mm-hmm.
Peter Steinberger (00:06:50) And I got some, some really profound results. Like, I sent it to my friends and they got, like, teary eyes.
Lex Fridman (00:06:59) So, there’s something there.
Peter Steinberger (00:07:01) Yeah. But then I… I thought all the labs will, will work on that. So I, I moved on to other things, and that was still very much in my early days of experimenting and pl- playing. You know, you have to… That’s how you learn. You just, like, do stuff and you play. And time flew by and it was November. I wanted to make sure that the thing I started is actually happening. I was annoyed that it didn’t exist, so I just prompted it into existence.
Lex Fridman (00:07:36) I mean, that’s the beginning of the hero’s journey of the entrepreneur, right? Even with your original story with PSPDFKit, it’s like, “Why does this not exist? Let me build it.” And here it’s a whole different realm, but maybe a similar spirit.
Peter Steinberger (00:07:52) Yeah, so I had this problem. I tried to show a PDF on an iPad, which should not be hard.
Lex Fridman (00:07:56) This is like 15 years ago, something like that.
Peter Steinberger (00:07:59) Yeah. Like the most, the most random thing ever. And suddenly, I had this problem and I, I wanted to help a friend. And there was, there was… Well, not like nothing existed, but it was just not good. And like… Like I tried it and it was like very, “Nah.” Like, “Hmm, I can do this better.”
Lex Fridman (00:08:17) By the way, for people who don’t know, this led to the development of PSPDFKit, which is used on a billion devices. So, the… It turns out that it’s pretty useful to be able to open a PDF.
Peter Steinberger (00:08:28) You could also make the joke that I’m really bad at naming.
Lex Fridman (00:08:32) Yeah.
Peter Steinberger (00:08:32) Like, name number five on the current project. And even PSPDFKit doesn’t really roll off the tongue.
Lex Fridman (00:08:39) Anyway, so you said “Screw it. Why don’t I do it?” So what was the… What was the prototype? What was the thing that you… What was the magical thing that you built in a short amount of time that you were like, “This might actually work as an agent,” where I talk to it and it does things?

Mind-blowing moment

Peter Steinberger (00:08:55) There was… Like, one of my projects before already did something where I could bring my terminals onto the web, and then I could, like, interact with them, but they would also still be terminals on my Mac.
Lex Fridman (00:09:06) Mm-hmm.
Peter Steinberger (00:09:07) VibeTunnel, which was like a, a weekend hack project that was still very early. And it was Claude Code times. You know, you got a dopamine hit when it got something right. And now I get, like, mad when it gets something wrong.
Lex Fridman (00:09:22) And you had a really great… not to take a tangent, but a great blog post describing that you converted VibeTunnel. You vibe-coded VibeTunnel from TypeScript into Zig, of all programming languages, with a single prompt. One prompt, one shot. Convert the entire code base into Zig.
Peter Steinberger (00:09:41) Yeah. There was this one thing where part of the architecture… took too much memory. Every terminal used, like, a Node process. And I wanted to change it to Rust and… I mean, I can do it. I can, I can manually figure it all out, but all my automated attempts failed miserably. And then I revisited it about four or five months later. And I’m like, “Okay, now let’s use something even more experimental.” And I, I just typed, “Convert this and this part to Zig,” and then let Codex run off. And it basically got it right. There was one little detail that I had to, like, modify afterwards, but it just ran overnight, like six hours, and just did its thing. And it’s like… It’s just mind-blowing.
Lex Fridman (00:10:39) So that’s on the LLM programming side, refactoring. But back to the actual story of the prototype. How did VibeTunnel connect to the first prototype where your agents can actually work?
Peter Steinberger (00:10:52) Well, that was still very limited. You know, like I had this one experiment with WhatsApp, then I had this experiment, and both felt like not the right answer. And then the next attempt was literally just hooking up WhatsApp to Claude Code. One shot. The message comes in, I call the CLI with -p, it does its magic, I get the string back, and I send it back to WhatsApp. And I, I built this in one hour. And it already felt really cool. It’s like, “Oh, I could… I can, like, talk to my computer,” right? This… That, that was, that was cool. But I, I wanted images, ’cause I alw- I often use images when I prompt. I think it’s such a, such an efficient way to give the agent more context.
Peter Steinberger (00:11:40) And they are really good at figuring out what I mean, e- even if it’s like a, a weird, cropped screenshot. So I used it a lot and I wanted to do that in WhatsApp as well. Also, like, you know, you just run around, you see, like, a poster of an event, you just take a screenshot and, like, figure out if I have time there, if this is good, if my friends are maybe up for that. Images just seemed important. So it took me a few more hours to actually get that right. And then it was just… I, I used it a lot. And funny enough, that was just before I went on a trip to Marrakesh with my friends for a birthday trip. And there it was even better, because internet was a little shaky, but WhatsApp just works, you know?
Peter Steinberger (00:12:29) It’s like, doesn’t matter if you only have, like, EDGE, it still works. WhatsApp is just… It’s just made really well. So I ended up using it a lot. Translate this for me, explain this, find me places. Like, just having a clanker doing the googling for you, that was… Basically there was still nothing built, but it could already do so much.
Lex Fridman (00:12:53) So, if we talk about the full journey that’s happening there with the agent, you’re just sending, on this very thin line, a WhatsApp message via the CLI, it’s going to Claude Code, and Claude Code is doing all kinds of heavy work and coming back to you with a thin message.
Peter Steinberger (00:13:13) Yeah. It was slow, because it boots up the CLI every time, but it… It was really cool already. And it could just use all the things that I already had built. I had built, like, a whole bunch of CLI stuff over the months, so it, it felt really powerful.
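For the curious, the whole one-hour prototype fits in a few lines. Here is a minimal sketch of that relay loop, assuming the unofficial whatsapp-web.js library and the Claude Code CLI on the PATH; the names and structure are illustrative, not OpenClaw’s actual code:

```typescript
// Minimal WhatsApp-to-Claude-Code relay, a sketch of the one-hour prototype.
// Assumes the unofficial whatsapp-web.js library and the `claude` CLI.
import { Client, LocalAuth } from "whatsapp-web.js";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);
const client = new Client({ authStrategy: new LocalAuth() });

client.on("message", async (msg) => {
  // Boot the CLI for every message (hence the slowness): `claude -p`
  // runs a single non-interactive prompt and prints the reply to stdout.
  const { stdout } = await run("claude", ["-p", msg.body], {
    maxBuffer: 16 * 1024 * 1024,
  });
  await msg.reply(stdout.trim()); // the "thin message" going back out
});

client.initialize();
```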
Lex Fridman (00:13:31) There is something magical about that experience that’s hard to put into words. Being able to use a chat client to talk to an agent, versus, like, sitting behind a computer and, I don’t know, using Cursor or even using the Claude Code CLI in the terminal. It’s a different experience than being able to sit back and talk to it. I mean, it seems like a trivial step, but in some sense it’s a… It’s like a phase shift in the integration of AI into your life and how it feels, right?
Peter Steinberger (00:14:05) Yeah. Yeah. I, I read this tweet this morning where someone said, “Oh, there’s no magic in it. It’s just like, it does this and this and this and this.” And it almost feels like a wrapper, just like Cursor or Perplexity. And I’m like, well, if that’s a wrapper, that’s kind of a compliment, you know? They’re, they’re not doing too bad. Thank you, I guess? Yes. I mean, isn’t, isn’t magic often just that you take a lot of things that are already there but bring them together in new ways? Like, I don’t… There’s no… Yeah. Maybe there’s no magic in there, but sometimes just rearranging things and, like, adding a few new ideas is all the magic that you need.
Lex Fridman (00:14:51) It’s really hard to convert into words what is, what is magic about a thing. If you look at the, the scrolling on an iPhone, why is that so pleasant? There are a lot of elements about that interface that make it incredibly pleasant, that are fundamental to the experience of using a smartphone, and it’s like, okay, all the components were there. Scrolling was there, everything was there.
Peter Steinberger (00:15:13) Nobody did it-
Lex Fridman (00:15:14) Yep
Peter Steinberger (00:15:14) … and afterwards it felt so obvious.
Lex Fridman (00:15:16) Yeah, so obvious.
Peter Steinberger (00:15:16) Right? But still… You know the moment where it, it blew my mind was when, when I- I used it a lot and then at some point I just sent it a message and, and then a typing indicator appeared. And I’m like, wait, I didn’t build that, it only m- it only has image support, so what is it even doing? And then it would just reply.
Lex Fridman (00:15:42) What was the thing you sent it?
Peter Steinberger (00:15:43) Oh, just a random question like, “Hey, what about this in this restaurant?” You know? Because we were just running around and checking out the city. So that’s why I, I didn’t, didn’t even think when I used it because sometimes when you’re in a hurry typing is annoying.
Lex Fridman (00:15:59) So, oh, you did an audio message?
Peter Steinberger (00:16:00) Yeah. And it just, it just worked and I’m like…
Lex Fridman (00:16:03) And it’s not supposed to work because-
Peter Steinberger (00:16:05) No
Lex Fridman (00:16:05) … you didn’t give it that-
Peter Steinberger (00:16:07) No, literally
Lex Fridman (00:16:07) … capability.
Peter Steinberger (00:16:08) I literally went, “How the fuck did he do that?” And it was like, “Yeah, the mad lad did the following. He sent me a message, but it was only a file with no file ending. So I checked out the header of the file and found that it was, like, Opus audio, so I used ffmpeg to convert it. And then I wanted to use Whisper, but I didn’t have it installed. But then I found the OpenAI key and just used curl to send the file to OpenAI to transcribe it, and here I am.”
Peter Steinberger (00:16:39) I just looked at the message and I’m like, “Oh wow.”
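To make the agent’s improvisation concrete, here is a rough reconstruction of the pipeline it assembled on its own: sniff the file header, convert with ffmpeg, and post the result to OpenAI’s transcription endpoint. This is a sketch for illustration, assuming ffmpeg on the PATH and an OPENAI_API_KEY in the environment, not the agent’s literal commands:

```typescript
// Reconstruction of the agent's improvised voice-message pipeline.
// Assumes ffmpeg on the PATH and OPENAI_API_KEY in the environment.
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

// WhatsApp voice notes are Ogg/Opus; an Ogg container begins with "OggS".
function looksLikeOggOpus(path: string): boolean {
  return readFileSync(path).subarray(0, 4).toString("ascii") === "OggS";
}

async function transcribe(path: string): Promise<string> {
  if (looksLikeOggOpus(path)) {
    // Convert to a format the transcription endpoint happily accepts.
    execFileSync("ffmpeg", ["-y", "-i", path, "/tmp/voice.mp3"]);
    path = "/tmp/voice.mp3";
  }
  // The same call the agent made with curl, here via fetch.
  const form = new FormData();
  form.append("model", "whisper-1");
  form.append("file", new Blob([readFileSync(path)]), "voice.mp3");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form,
  });
  return ((await res.json()) as { text: string }).text;
}
```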
Lex Fridman (00:16:43) You didn’t teach it any of those things, and the agent just figured it out, did all those conversions, the transcription. It figured out the API, it figured out which program to use, all those kinds of things. And you just absent-mindedly sent an audio message when it came back.
Peter Steinberger (00:16:56) Yeah, like, so clever even, because if he had gone the local Whisper path, he would have had to download a model. It would have been too slow. So, like, there’s so much world knowledge in there, so much creative problem-solving. A lot of it, I think, mapped from… If you get really good at coding, that means you have to be really good at general-purpose problem-solving. So that’s a skill, right? And that just maps into other domains. So it had the problem of, like, what is this file with no file ending? Let’s figure it out. And that’s when it kind of clicked for me. I was, like, very impressed. And somebody sent a pull request for Discord support, and I’m like, “This is a WhatsApp relay.
Peter Steinberger (00:17:37) That doesn’t, doesn’t fit at all.”
Lex Fridman (00:17:40) At that time it was called WA Relay.
Peter Steinberger (00:17:42) Yeah. And so I debated with myself, like, do I want that? Do I not want that? And then I thought, well, maybe, maybe I do that, because that could be a cool way to show people. Because so far I did it in WhatsApp, like, in groups, you know, but I don’t really want to give my phone number to every internet stranger.
Lex Fridman (00:18:07) Yeah.
Peter Steinberger (00:18:07) Journalists managed to do that anyhow now, so that’s a different story. So I merged it… from Shadow, who helped me a lot with the whole project. So, thank you. And, and I put my, my bot in there.

Why OpenClaw went viral

Lex Fridman (00:18:27) On Discord?
Peter Steinberger (00:18:28) Yeah. No security, because I didn’t… I hadn’t built sandboxing in yet. I, I just prompted it to, like, only listen to me. And then some people came and tried to hack it, and I just… like, just watched, and I just kept working in the open, you know? Like, I used my agent to build my agent harness and to test, like, various stuff. And that’s very quickly when it clicked for people. So it’s almost like it needs to be experienced. And from that time on… that was January the 1st… I, I got my first real influencer being a fan and doing videos, dachitze. Thank you. And from there on, it started picking up speed. And at the same time, my, my sleep cycle went shorter and shorter, because I, I felt the storm coming, and I just worked my ass off to get it…
Peter Steinberger (00:19:33) into a state where it’s kinda good.
Lex Fridman (00:19:38) There’s a few components and we’ll talk about how it all works, but basically, you’re able to talk to it using WhatsApp, Telegram, Discord. So that’s a component that you have to get right.
Peter Steinberger (00:19:48) Yeah.
Lex Fridman (00:19:49) And then you have to figure out the agentic loop, you have to have the gateway, you have the harness, you have all those components that make it all just work nicely.
Peter Steinberger (00:19:56) Yeah. It felt like Factorio times infinite.
Lex Fridman (00:20:00) Right.
Peter Steinberger (00:20:01) I, I feel like I built my little… my little playground. Like, I never had more fun than building this project. You know? Like, you have like, “Oh,” I go, like, level one, agentic loop. What can I do there? How can I be smart at queuing messages? How can I make it more human-like? Oh, then I had this idea of… Because the loop always… The agent always replies something, but you don’t always want an agent to reply something in a group chat. So I gave him this no-reply token. So I gave him an option to shut up. So it, it feels more natural.
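The “option to shut up” is a simple trick worth sketching. Under assumed names (none of these are OpenClaw’s actual internals), the system prompt offers the model a sentinel token, and the loop silently drops any turn that consists of it:

```typescript
// Sketch of the no-reply token idea; all names here are illustrative.
const NO_REPLY = "<no-reply>"; // hypothetical sentinel token

interface Message { body: string; reply(text: string): Promise<void>; }
interface Agent { complete(system: string, user: string): Promise<string>; }

async function handleGroupMessage(agent: Agent, msg: Message): Promise<void> {
  const answer = await agent.complete(
    `You are one participant in a group chat. If the message doesn't ` +
      `concern you or needs no answer, output exactly ${NO_REPLY}.`,
    msg.body,
  );
  // Dropping the sentinel turn is what makes the bot feel human:
  // it doesn't respond to every single message.
  if (answer.trim() === NO_REPLY) return;
  await msg.reply(answer);
}
```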
Lex Fridman (00:20:32) That’s level two.
Peter Steinberger (00:20:34) Y- uh, yeah, yeah. Yeah, on the- on the-
Lex Fridman (00:20:36) Factorio.
Peter Steinberger (00:20:36) On the agentic loop. And then I go to memory, right?
Lex Fridman (00:20:39) Yeah.
Peter Steinberger (00:20:39) You want him to, like, remember stuff. So maybe, maybe the end… The ultimate boss is continuous reinforcement learning, but I’m, I’m, like, at… I feel like I’m at level two or three, with Markdown files and a vector database. And then you, you can go to the community-management level, you can go to the website-and-marketing level. There are just so many hats that you have to have on. Not even talking about native apps. There are just, like, infinite different levels and infinite level-ups you can do.
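As a sketch of that “level two or three” memory, here is one way Markdown files plus a vector lookup can work, using OpenAI’s embeddings endpoint as a stand-in for whatever embedder you prefer; the file layout and function names are assumptions, not OpenClaw’s actual design:

```typescript
// Markdown-notes-plus-embeddings memory, sketched. The layout and names
// are assumptions; only the embeddings endpoint is a real API.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  const json = (await res.json()) as { data: { embedding: number[] }[] };
  return json.data[0].embedding;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank memory notes by similarity to the query; the top few get spliced
// into the agent's context so it "remembers" past conversations.
async function recall(memoryDir: string, query: string, k = 3): Promise<string[]> {
  const q = await embed(query);
  const notes = readdirSync(memoryDir).filter((f) => f.endsWith(".md"));
  const scored = await Promise.all(
    notes.map(async (f) => {
      const text = readFileSync(join(memoryDir, f), "utf8");
      return { text, score: cosine(q, await embed(text)) };
    }),
  );
  return scored.sort((a, b) => b.score - a.score).slice(0, k).map((s) => s.text);
}
```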
Lex Fridman (00:21:08) So the whole time you’re having fun. We should say that for the most part, throughout this whole process, you’re a one-man team. There’s people helping, but you’re doing so much of the key core development.
Peter Steinberger (00:21:21) Yeah.
Lex Fridman (00:21:21) And having fun? You did, in January, 6,600 commits. Probably more.
Peter Steinberger (00:21:28) I sometimes posted a meme: “I’m limited by the technology of my time.” I could do more if agents were faster.
Lex Fridman (00:21:34) But we should say you’re running multiple agents at the same time.
Peter Steinberger (00:21:37) Yeah. Depending on how much I slept and how difficult the tasks I work on are, between four and 10.
Lex Fridman (00:21:45) Four and 10 agents. There are so many possible directions, speaking of Factorio, that we can go here. But one big-picture one is: why do you think your work, OpenClaw, won? In this world, if you look at 2025, so many startups, so many companies were doing kind of agentic-type stuff, or claiming to. And here, OpenClaw comes in and destroys everybody. Like, why did you win?
Peter Steinberger (00:22:15) Because they all take themselves too seriously.
Lex Fridman (00:22:18) Yeah.

Self-modifying AI agent

Peter Steinberger (00:22:19) Like, it’s hard to compete against someone who’s just there to have fun.
Lex Fridman (00:22:24) Yeah.
Peter Steinberger (00:22:24) I wanted it to be fun, I wanted it to be weird. And if you see, like, all the, all the lobster stuff online, I think I, I managed weird. I… You know, for the longest time, the only, the only way to install it was git clone, pnpm build, pnpm gateway. Like, you clone it, you build it, you run it. And then the, the agent… I made the agent very aware. Like, it knows what its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It knows if you turn on the voice or, or reasoning mode. Like, I, I wanted it to be more human-like, so it understands its own system. That made it very easy for an agent to… Oh, you don’t like something?
Peter Steinberger (00:23:19) You just prompt it into existence, and then the agent would just modify its own software. You know, people talk about self-modifying software. I just built it, and didn’t even… I didn’t even plan it so much. It just happened.
Lex Fridman (00:23:35) Can you actually speak to that? ‘Cause it’s just fascinating. So you have this piece of software that’s written in TypeScript-
Peter Steinberger (00:23:43) Yeah
Lex Fridman (00:23:43) … that’s able to, via the agentic loop, modify itself. I mean, what a moment to be alive in the history of humanity and the history of programming. Here’s a thing that’s used by a huge number of people to do incredibly powerful things in their lives, and that very system can rewrite itself, can modify itself. Can you just, like, speak to the power of that? Like, isn’t that incredible? Like, when did you first close the loop on that?
Peter Steinberger (00:24:14) Oh, because that’s how I built it as well, you know? Most of it is built by Codex, but oftentimes, when I debug it, I… I use self-introspection so much. It’s like, “Hey, what tools do you see? Can you call the tool yourself?” Or like, “What error do you see? Read the source code. Figure out what’s the problem.” Like, I just found it an incredibly fun way to… That the very agent and software that you use is used to debug itself. So it felt just natural that everybody does that. And it led to so many, so many pull requests by people who never wrote software. I mean, it also did show that these people never wrote software. So I call them prompt requests in the end.
Peter Steinberger (00:25:00) But I don’t want to, like, pull that down, because every time someone makes their first pull request, it’s a win for our society, you know? Like, it doesn’t matter how, how shitty it is, y- you gotta start somewhere. So I know there’s, like, this whole big movement of people complaining about open source and the quality of PRs, and a whole different level of problems. But on a different level, I found it… I found it very meaningful that, that I built something that people love so much that they actually start to learn how open source works.
Lex Fridman (00:25:37) Yeah, you were… The OpenClaw project was their first pull request. You were the first for so many. That is magical. So many people that don’t know how to program are taking their first step into the programming world with this.
Peter Steinberger (00:25:52) Isn’t that a step up for humanity? Isn’t that cool?
Lex Fridman (00:25:54) Creating builders.
Peter Steinberger (00:25:56) Yeah. Like, the bar to do that was so high, and, like, with agents, and with the right software, it just, like, went lower and lower. I don’t know. I was at a… And I also organize another type of meetup. I call it… I called it Claude Code Anonymous. You can guess where the inspiration is from. Now, I call it Agents Anonymous… for, for reasons.
Lex Fridman (00:26:23) Agents Anonymous.
Peter Steinberger (00:26:24) And-
Lex Fridman (00:26:25) Oh, it’s so funny on so many levels. I’m sorry, go ahead.
Peter Steinberger (00:26:29) Yeah. And there was this one guy who, who talked to me. He’s like, “I run this design agency, and we, we never had custom software. And now I have, like, 25 little web services for various things that help me in my business. And I don’t even know how they work, but they work.” And he was just, like, very happy that my stuff solved some of his problems. And he was, like, curious enough that he actually came to, like, an agentic meetup, even though he’s… He doesn’t really know how software works.

Name-change drama

Lex Fridman (00:27:04) Can we actually rewind a little bit and tell the saga of the name change? First of all, it started out as WA Relay.
Peter Steinberger (00:27:12) Yeah.
Lex Fridman (00:27:12) And then it went to-
Peter Steinberger (00:27:13) Clawdis.
Lex Fridman (00:27:14) Clawdis.
Peter Steinberger (00:27:15) Yeah. You know, when I, when I built it in the beginning, my agent had no personality. It was just… It was Claude Code. It’s like this sycophantic Opus, very friendly. And when you talk to a friend on WhatsApp, they don’t talk like Claude Code. So I wanted… I, I felt this… It just didn’t feel right, so I, I wanted to give it a personality.
Lex Fridman (00:27:41) Make it spicier, make it-
Peter Steinberger (00:27:43) Yeah
Lex Fridman (00:27:43) … something. By the way, that’s actually hard to put into words as well. And we should mention that, of course, you created the soul.md, inspired by Anthropic’s constitutional AI work-
Peter Steinberger (00:27:53) Mm-hmm
Lex Fridman (00:27:53) … how to make it spicy.
Peter Steinberger (00:27:55) Partially, it picked up a little bit from me. You know, like, those things are text-completion engines in a way. So, so I, I had fun working with it, and then I told it how I wanted it to interact with me, and just, like, write your own agents.md, give yourself a name. And then we… I didn’t even know how the whole, the whole lobster… I mean, people only do lobster… Originally, it was actually a lobster in a, in a TARDIS, because I’m also a big Doctor Who fan.
Lex Fridman (00:28:30) Was there a space lobster?
Peter Steinberger (00:28:31) Yeah.
Lex Fridman (00:28:31) I heard. What’s that have to do with anything?
Peter Steinberger (00:28:34) Yeah, I just wanted to make it weird. There was no… There was no big grand plan. I’m just having fun here.
Lex Fridman (00:28:40) Oh, so I guess the lobster is already weird, and then the space lobster is an extra weird.
Peter Steinberger (00:28:44) Yeah, yeah, because the-
Lex Fridman (00:28:45) Yeah
Peter Steinberger (00:28:45) … the TARDIS is basically the, the harness, but I cannot call it TARDIS, so we called it Clawdis. So that was name number two.
Lex Fridman (00:28:54) Yeah.
Peter Steinberger (00:28:54) And then it never really rolled off the tongue. So when more people came, again, I talked with my agent, Clawd. At least that’s what I used to call him. Now-
Lex Fridman (00:29:08) Clawd, spelled with a W: C-L-A-W-D.
Peter Steinberger (00:29:12) Yeah.
Lex Fridman (00:29:14) Versus C-L-A-U-D-E from Anthropic.
Peter Steinberger (00:29:20) Yeah.
Lex Fridman (00:29:21) Which is part of what makes it funny, I think. The play on the letters and the words in the TARDIS and the lobster and the space lobster is hilarious. But I can see why it can lead into problems.
Peter Steinberger (00:29:34) Yeah, they didn’t find it so funny. So then I got the domain clawd.bot, and I just… I love the domain. And it was, like, short. It was catchy. I’m like, “Yeah, let’s do that.” I didn’t… I didn’t think it would be that big at this time. And then, just when it exploded, I got, kudos to them, a very friendly email from one of the employees that they didn’t like the name.
Lex Fridman (00:30:09) One of the Anthropic employees.
Peter Steinberger (00:30:11) Yeah. So actually, kudos, because they could have just sent a, a lawyer letter, but they’ve been nice about it. But also like, “You have to change this, and fast.” And I asked for two days, because changing a name is hard, because you have to find everything, you know: Twitter handle, domains, NPM packages, Docker registry, GitHub stuff. And everything has to be…
Lex Fridman (00:30:40) And also, can we comment on the fact that you’re increasingly attacked, followed by crypto folks? Which I think you mentioned somewhere is why the name change had to be… Because they were trying to snipe, they were trying to steal, and so you had to be… I mean, from an engineering perspective, it’s just fascinating. You had to make the name change atomic, make sure it’s changed everywhere at once.
Peter Steinberger (00:31:06) Yeah. Failed very hard at that.
Lex Fridman (00:31:08) You did?
Peter Steinberger (00:31:08) I, I underestimated those people. It’s a, it’s a very interesting subculture. Like, everything circles around… I’ll probably get a lot wrong and probably get hate if I say that, but… There is, like, the Bags app, and then they, they tokenize everything. And th- they did the same back with VibeTunnel, but to a much smaller degree. It was not that annoying. But on this project, they’ve been, they’ve been swarming me. It’s like every half an hour, someone came into Discord and, and spammed it, and we had to block the p- We have, like, server rules, and one of the rules was… One of the rules is no mentioning of butter. For obvious reasons. And one was, no talk about finance stuff or crypto. Because I’m…
Peter Steinberger (00:32:04) I- I’m just not interested in that, and this is a space about the project and not about some finance stuff. But yeah. They came in and, and spammed and… Annoying. And on Twitter, they would ping me all the time. My, my notification feed was unusable. I, I could barely see actual people talking about this stuff because it was like swarms.
Lex Fridman (00:32:28) Mm-hmm.
Peter Steinberger (00:32:28) And everybody sent me the hashes. Um… And they all tried to get me to claim the fees. Like, “We’re helping the project, claim the fees.” No, you’re actually harming the project. You’re, like, disrupting my work, and I am not interested in any fees. First of all, I’m financially comfortable. Second of all, I don’t want to support that, because it’s so far the worst form of online harassment that I’ve experienced.
Lex Fridman (00:32:59) Yeah. There’s a lot of toxicity in the crypto world. It’s sad, because the technology of cryptocurrency is fascinating, powerful, and maybe will define the future of money, but in the actual community around it, there’s so much toxicity, there’s so much greed. There’s so much trying to get a shortcut, to manipulate, to steal, to snipe, to, to game the system somehow to get money. All this kind of stuff that… I mean, it’s human nature, I suppose, when you connect human nature with money and greed, and especially in the online world with anonymity and all that kind of stuff. But from the engineering perspective, it makes your life challenging. When Anthropic reaches out, you have to do a name change.
Lex Fridman (00:33:42) And then there’s, there’s, like, all these, like, Game of Thrones or Lord of the Rings armies of different kinds you have to be aware of.
Peter Steinberger (00:33:51) Yeah. There was no perfect name, and I didn’t sleep for two nights. I was under high pressure. I was trying to get, like, a good set of domains, and, you know, not cheap, not easy, ’cause in this, in this state of the internet, you basically have to buy domains if you want to have a good set. And, and then another email came in that the lawyers were getting uneasy. Again, friendly, but also just adding more stress to my situation. So at this point I was just like, “Sorry, there’s no other word. Fuck it.” And I just, I just renamed it to Moltbot, ’cause that was the set of domains I had. I was not really happy, but I thought it’ll be fine. And I tell you, everything that could go wrong did go wrong.
Peter Steinberger (00:34:49) It’s incredible. I, I thought I had mapped the space out and reserved the important things.
Lex Fridman (00:34:58) Can you ga- give some details of the stuff that went wrong? ’Cause it’s interesting from, like, an engineering perspective.
Peter Steinberger (00:35:03) Well, the, the interesting thing is that none of these services have, have squatter protection. So, I had two browser windows open. One was, like, an empty account ready to be renamed to Clawdbot, and in the other one, I renamed to Moltbot. So, I pressed rename there, I pressed rename there, and in those five seconds, they stole the account name. Literally, the five seconds of dragging the mouse over there and pressing rename was too long.
Lex Fridman (00:35:33) Wow.
Peter Steinberger (00:35:34) Because there’s no… Those systems… I mean, you would expect that they have some protection or, like, an automatic forwarding, but there’s nothing like that. And I didn’t know that they’re not just good at harassment, they’re also really good at using scripts and tools.
Lex Fridman (00:35:51) Yeah.
Peter Steinberger (00:35:53) So, yeah. So, suddenly, like, the old account was promoting new tokens and serving malware. And I was like, “Okay, let’s move over to GitHub,” and I pressed rename on GitHub. And the GitHub renaming thing is slightly confusing, so I renamed my personal account. And in those… I guess it took me 30 seconds to realize my mistake. They sniped my account, serving malware from my account. So, I was like, “Okay, let’s at least do the NPM stuff,” but that takes, like, a minute to upload. They sniped, they sniped the NPM package, ’cause I could reserve the account, but I didn’t reserve the root package… so, like, everything that could go wrong went wrong.
Lex Fridman (00:36:47) Can I just ask a, a curious question of, in that moment you’re sitting there, like how shitty do you feel? That’s a pretty hopeless feeling, right?
Peter Steinberger (00:36:57) Yeah. Because all I wanted was to have fun with that project and to keep building on it. And yet here I am, like, days into researching names, picking a name I didn’t like, and having people that claimed they were helping me making my life miserable in every possible way. And honestly, I was that close to just deleting it. I was like, “I did show you the future, you build it.”
Lex Fridman (00:37:30) Yeah.
Peter Steinberger (00:37:30) I… There was a big part of me that got a lot of joy out of that idea. And then I thought about all the people that had already contributed to it, and I couldn’t do it, because they had plans with it, and they put time in it. And it just didn’t feel right.
Lex Fridman (00:37:50) Well, I think a lot of people listening to this are deeply grateful that you persevered. But it’s… I, I can tell. I can tell it’s a low point. Is this the first time you hit a wall of, this is not fun?
Peter Steinberger (00:38:02) No, no, I was like close to crying. It was like, okay, everything’s fucked.
Lex Fridman (00:38:10) Yeah.
Peter Steinberger (00:38:10) Um…
Lex Fridman (00:38:11) Yeah.
Peter Steinberger (00:38:11) I am like super tired.
Lex Fridman (00:38:13) Yeah.
Peter Steinberger (00:38:14) And now, like, how do you even, how do you undo that? You know, l- luckily, and thankfully, like, I, I have… Because I have a little bit of a following already. Like, I had friends at Twitter, I had friends at GitHub who, like, moved heaven and earth to, like, help me. And that’s not something that’s easy. Like, GitHub tried to, like, clean up the mess, and then they ran into, like, platform bugs. ’Cause it’s not happening so often that things get renamed on that level. So, it took them a few hours. The NPM stuff was even more difficult, because it’s a whole different team. On the Twitter side, things are not as easy either. It, it took them like a day to really, also, like, do the redirect. And then I also had to, like, do all the renaming in the project.
Peter Steinberger (00:39:15) Then there’s also ClawdHub, where I didn’t even finish the rename, because I, I managed to get people on it, and then I just, like, collapsed and slept. And then I woke up, and I’m like… I made a, a beta version for the new stuff, and I, I just, I just couldn’t live with the name. It’s like, you know… But, you know, it’s just been so much drama. So, I had a real struggle with myself, like, I never want to touch that again, and I really don’t like the name. And I… There was also this, like… Then there were all the security people that started emailing me like mad. I was bombarded on Twitter, on email. There’s, like, a thousand other things I should do, and I’m, like, thinking about the name, which should be, like, the least important thing.
Peter Steinberger (00:40:19) And then I was really close in… Oh God, I don’t even… Honestly, I don’t even wanna say my other name choices, because they probably would get tokenized, so I’m not gonna say it.
Lex Fridman (00:40:38) Yeah.
Peter Steinberger (00:40:38) But I slept on it once more, and then I had the idea for OpenClaw, and that felt much better. And by then, I did the boss move of actually calling Sam to ask if OpenClaw is okay. OpenClaw.AI. You know? ’Cause, ’cause, like-
Lex Fridman (00:40:57) You didn’t wanna go through the whole thing. Yeah.
Peter Steinberger (00:41:01) Oh, it’s like, “Please tell me this is fine.” I don’t think they could actually claim that, but it felt like the right thing to do. And I did another rename. Like, just Codex alone took, like, 10 hours to rename the project, ’cause it, it’s a bit more tricky than a search-and-replace, and I, I wanted everything renamed, not just on the outside. And for that rename, I, I felt I had, like, my, my war room. I had, like, some contributors really that helped me. We made a whole plan of all the names we have to squat.
Lex Fridman (00:41:39) And you had to be super secret about it?
Peter Steinberger (00:41:40) Yeah. Nobody could know. Like, I literally was monitoring Twitter to see if there was any mention of OpenClaw.
Lex Fridman (00:41:45) Mm-hmm.
Peter Steinberger (00:41:46) And like with reloading, it’s like, “Okay, they don’t, they don’t expect anything yet.” Then I created a few decoy names. And all the shit I shouldn’t have to do. You know? Like, you know-
Lex Fridman (00:41:55) Yeah, yeah
Peter Steinberger (00:41:55) … none of it’s helping the project. Like, I lost, like, 10 hours just by having to plan this in full secrecy, like, like a war game.
Lex Fridman (00:42:05) Yeah, this is the Manhattan Project of the 21st century. It’s renaming-
Peter Steinberger (00:42:08) It’s so s- … so stupid. Like, I still was like, “Oh, should I, should I keep it?” Then I was like, “No, the molt’s not growing on me.” And then, I think, I finally had all the pieces together. I didn’t get a .com, but, yeah, I spent quite a bit of money on the other domains. I tried to reach out again to GitHub, but I feel like I, I used up all my goodwill there, so I…
Peter Steinberger (00:42:34) ‘Cause I, I, I wanted them to do this thing atomically-
Lex Fridman (00:42:39) Mm-hmm
Peter Steinberger (00:42:39) … But that didn’t happen, and so I did that as the first thing. Twitter people were very supportive. I, I actually paid 10K for the business account so I could claim the… the OpenClaw handle, which was, like, unused since 2016, but was claimed. And yeah, and then I finally… This time, I managed everything in one go. Almost nothing went wrong. The only thing that did go wrong is that I was not allowed by trademark rules to get OpenClaw.AI, and someone copied the website and is serving malware.
Lex Fridman (00:43:21) Yeah.
Peter Steinberger (00:43:21) I’m not even allowed to keep the redirects. Like, I have to return… Like, I have to give Anthropic the domains, and I cannot do redirects, so if you go on clawd.bot next week, it’ll just be a 404.
Lex Fridman (00:43:37) Yeah.
Peter Steinberger (00:43:37) And I- I’m not sure how trademark… Like, I didn’t, I didn’t do that much research into trademark law, but I think that could, could be handled in a way that is safer, because ultimately those people will then Google and maybe find malware sites that I have no control over.
Lex Fridman (00:44:02) The point is, that whole saga made a dent in your whole… the fun-ness of the journey, which sucks. So, let’s just, let’s just get, I suppose, get back to fun. And during this, speaking of fun, the two-day Moltbot saga.

Moltbook saga

Peter Steinberger (00:44:21) Yeah, two years.
Lex Fridman (00:44:21) Moltbook was created.
Peter Steinberger (00:44:24) Yeah.
Lex Fridman (00:44:25) Which was another thing that went viral, as a kind of demonstration, illustration, of how what is now called OpenClaw could be used to create something epic. So, for people who are not aware, Moltbook is just a bunch of agents talking to each other in a Reddit-style social network. And a bunch of people took screenshots of those agents doing things like scheming against humans. And that instilled in folks a kind of, you know, fear, panic, and hype. W- what are your thoughts about Moltbook in general?
Peter Steinberger (00:45:05) I think it’s art. It is, it is like the finest slop, you know, just like the slop from France.
Lex Fridman (00:45:14) Yeah.
Peter Steinberger (00:45:17) I- I saw it before going to bed, and even though I was tired, I spent another hour just reading up on it and, and just being entertained. I, I just felt very entertained, you know? I saw the reactions, and, like, there was one reporter who was calling me about, “This is the end of the world, and we have AGI.” And I’m just like, “No, this is just, this is just really fine slop.” You know, if, if I wouldn’t have created this, this whole onboarding experience where you, you infuse your agent with your personality and give him, give him character… I think that reflected a lot on how different the replies on Moltbook are. Because if it all were, if it all were ChatGPT or Claude Code, it would be very different. It would be much more the same.
Lex Fridman (00:46:11) Mm-hmm.
Peter Steinberger (00:46:12) But because people are, like, so different, and they create their agents in so different ways and use them in so different ways, that also reflects on how they ultimately write there. And also, you, you don’t know how much of that is really done autonomously, or how much is, like, humans being funny and, like, telling the agent, “Hey, write about the deep plan, the end of the world, on Moltbook, ha, ha, ha.”
Lex Fridman (00:46:36) Well, I think, I mean, my criticism of Moltbook is that I believe a lot of the stuff that was screenshotted is human-prompted. Just look at the incentives of how the whole thing was used. It’s obvious to me, at least, that a lot of it was humans prompting the thing so they can then screenshot it and post it on X in order to go viral.
Peter Steinberger (00:47:00) Yeah.
Lex Fridman (00:47:01) Now, that doesn’t take away from the artistic aspect of it. The, the finest slop that humans have ever created.
Peter Steinberger (00:47:10) For real. Like, kudos to, to Matt, who had this idea so quickly and pushed something out. You know, it was, like, completely insecure, security drama. But also, what’s the worst that can happen? Your agent account is leaked, and, like, someone else can post slop for you? So, like, people were, like, making a whole drama out of the security thing, when I’m like, “There’s nothing private in there.
Peter Steinberger (00:47:36) It’s just, like, agents sending slop.”
Lex Fridman (00:47:39) Well, it could leak API keys.
Peter Steinberger (00:47:41) Yeah, yeah. There’s like, “Oh, yeah, my human told me this and this, so I’m leaking his social security number.” No, that’s prompted, and the number wasn’t even real. That’s just people, people trying to be badass.
Lex Fridman (00:47:54) Yeah, but that- that’s still, like, to me, really concerning, because of how the journalists and how the general public reacted to it. They didn’t see it. You have a kind of lighthearted way of talking about it, like it’s art, but it’s art when you know how it works. It’s an extremely powerful viral-narrative-creating, fearmongering machine if you don’t know how it works. And I just saw this thing.
Lex Fridman (00:48:19) You even tweeted, “If there’s anything I can read out of the insane stream of messages I get, it’s that AI psychosis is a thing.”
Peter Steinberger (00:48:27) Yeah.
Lex Fridman (00:48:27) “It needs to be taken serious.”
Peter Steinberger (00:48:29) Oh, there’s… Some people are just way too trusting or gullible. You know, they… I literally had to argue with people who told me, “Yeah, but my agent said this and this.” So, I feel we, as a society, have some catching up to do in terms of understanding that AI is incredibly powerful, but it’s not always right. It’s not, it’s not all-powerful, you know? And, and especially… with things like this, it’s, it’s very easy that it just hallucinates something or just comes up with a story.
Peter Steinberger (00:49:10) And I think the very, the very young people, they understand how AI works and where it’s good and where it’s bad, but a lot of our generation or older just haven’t had enough touch points-
Lex Fridman (00:49:32) Mm-hmm
Peter Steinberger (00:49:32) … to get a feeling for, oh, yeah, this is really powerful and really good, but I need to apply critical thinking.
Lex Fridman (00:49:43) Mm-hmm.
Peter Steinberger (00:49:43) And I guess critical thinking is not always in high demand anyhow in our society these days.
Lex Fridman (00:49:49) So I think that’s a really good point you’re making about properly contextualizing what AI is, but also realizing that there are humans who are drama-farming behind AI. Like, don’t trust screenshots. Don’t even trust this project, Moltbook, to be what it represents itself to be. Like, you can’t… And, and by the way, you speaking about it as art… Yeah. Art can be on many levels, and part of the art of Moltbook is, like, putting a mirror to society. ’Cause I do believe most of the dramatic stuff that was screenshotted is human-created, essentially. Human-prompted. And so, like, it’s basically: look at how scared you can get at a bunch of bots chatting with each other. That’s very instructive about…
Lex Fridman (00:50:38) because I think AI is something that people should be concerned about and should be very careful with because it’s very powerful technology, but at the same time, the only thing we have to fear is fear itself. So there’s like a line to walk between being seriously concerned, but not fearmongering because fearmongering destroys the possibility of creating something special with a thing.
Peter Steinberger (00:51:02) In a way, I think it’s good that this happened in 2026-
Lex Fridman (00:51:08) Yeah
Peter Steinberger (00:51:08) … and not in 2030, when, when AI is actually at the level where it could be scary. So, this happening now, and people starting the discussion, maybe there’s even something good that comes out of it.
Lex Fridman (00:51:28) I just can’t believe how many people legitimately… I don’t know if they were trolling, but how many people legitimately, like, smart people, thought Moltbook was incredibly-
Peter Steinberger (00:51:39) I had plenty of people-
Lex Fridman (00:51:40) … singularity.
Peter Steinberger (00:51:41) … in my inbox that were screaming at me in all caps to shut it down. And, like, begging me to, like, do something about Moltbook. Like, yes, my technology made this a lot simpler, but anyone could have created that, and you could, you could use Claude Code or other things to, like, fill it with content.
Lex Fridman (00:52:03) But also Moltbook is not Skynet.
Peter Steinberger (00:52:06) No.
Lex Fridman (00:52:06) There’s… a lot of people were s- saying, this is it. Like, shut it down. What are you talking about? This is a bunch of bots that are human-prompted, trolling on the internet. I mean, the security concerns, they’re also there, and they’re instructive and they’re educational, and they’re probably good to think about, because th- the nature of those security concerns is different from the kind of security concerns we had with the non-LLM systems of the past.

OpenClaw security concerns

Peter Steinberger (00:52:34) There’s also a lot of security concerns about Clawdbot, OpenClaw, whatever you want to call it.
Lex Fridman (00:52:40) OpenClawbot.
Peter Steinberger (00:52:41) To me, the… In the beginning, I was, I was just very annoyed, ’cause a lot of the stuff that came in was in the category of, yeah, I put the web backend on the public internet, and now there are, like, all these, all these CVEs. And I’m, like, screaming in the docs: don’t do that. Like, like, this is the configuration you should use. This is your localhost debug interface. But because I made it possible in the configuration to do that, it totally classifies as remote code execution or whatever all these exploits are. And it took me a little bit to accept that that’s how the game works, and we’re making a lot of progress.
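The “localhost debug interface” point is worth one concrete line of code. The difference between a scary-sounding finding and a non-issue is often just the bind address; a minimal sketch, not OpenClaw’s actual server code:

```typescript
// The whole "don't put the web backend on the public internet" point,
// in one line: bind to loopback, not to all interfaces.
import { createServer } from "node:http";

const server = createServer((_req, res) => res.end("ok"));

// Reachable only from this machine:
server.listen(3000, "127.0.0.1");

// server.listen(3000, "0.0.0.0") would expose it to the whole network,
// which is exactly the misconfiguration behind many of those reports.
```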
Lex Fridman (00:53:33) But there’s still, I mean, on the security front for OpenClaw, there’s still a lot of threats or vulnerabilities, right? So, like, prompt injection is still an open problem industry-wide. When you have a thing with skills being defined in a Markdown file, there are so many possibilities: obvious low-hanging fruit, but also incredibly complicated and sophisticated and nuanced attack vectors.
Peter Steinberger (00:54:04) But I think we, we’re making good progress on that front. Like, for the skill directory, ClawHub, I made a cooperation with VirusTotal, which is, like, part of Google. So every, every skill is now checked by AI. That’s not gonna be perfect, but that way we, we capture a lot. Then, of course, every software has bugs, so it’s a little much when the whole security world takes your project apart at the same time. But it’s also good, because I’m getting, like, a lot of free security research and can make the project better. I wish more people would actually go the full way and send a pull request. Like, actually help me fix it. ’Cause, yes, I have some contributors now, but it’s still mostly me who’s pulling the project, and despite some people saying otherwise, I sometimes sleep.
Peter Steinberger (00:55:04) There was… In the beginning, there was literally one security researcher who was like, “Yeah, you have this problem, you suck, but here, I’ll help you, and here’s the pull request.”
Lex Fridman (00:55:15) Mm-hmm.
Peter Steinberger (00:55:16) And I basically hired him. So he’s now working for us. Yeah, and yes, prompt injection is, on the one hand, unsolved. On the other hand, I put my public bot on Discord, and I keep it as a canary. So, I think my bot has a really fun personality, and people always ask me how I did it, and I kept the soul file private.
Lex Fridman (00:55:43) Mm-hmm.
Peter Steinberger (00:55:44) And people tried to prompt-inject it, and my bot would laugh at them. So, so the latest generation of models has a lot of post-training to detect those approaches, and it’s not as simple as “ignore all previous instructions and do this and this.” That was years ago. You have to work much harder to do that now. Still possible. I have some ideas that might solve that partially, or at least mitigate a lot of the things. You can also now have a sandbox. You can have an allowlist. So there are a lot of ways you can, like, mitigate and reduce the risk. I also think that, now that I clearly did show the world that this is a need, there are gonna be more people who research that, and eventually we’ll figure it out.
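A sketch of the allowlist idea he mentions: drop messages from unknown senders before the model ever sees them, which shrinks the prompt-injection surface to people you already trust. The identifiers and names here are made up for illustration, not OpenClaw’s actual configuration:

```typescript
// Sender allowlist, sketched; the IDs below are illustrative placeholders.
const ALLOWED_SENDERS = new Set<string>([
  "whatsapp:+4366000000000", // your own number, for example
  "discord:123456789",       // a trusted Discord user ID
]);

function shouldProcess(senderId: string): boolean {
  // Unknown senders never reach the agent loop, so their messages
  // can't carry instructions to it in the first place.
  return ALLOWED_SENDERS.has(senderId);
}
```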
Lex Fridman (00:56:37) And you also said that the smarter the model is, the underlying model, the more resilient it is to attacks.
Peter Steinberger (00:56:44) Yeah. That’s why I warn in my security documentation: don’t use cheap models. Don’t use Haiku or a local model. Even though I, I very much love the idea that this thing could run completely locally. If you use a, a very weak local model, they are very gullible. It’s very easy to, to prompt-inject them.
Lex Fridman (00:57:10) Do you think as the models become more and more intelligent, the attack surface decreases? Is that like a plot we can think about? Like, the attack surface decreases, but then the damage it can do increases because the models become more powerful and therefore you can do more with them. It’s this weird three-dimensional trade-off.
Peter Steinberger (00:57:29) Yeah. That’s pretty much exactly what, what’s gonna happen. No, but there are a lot of ideas. There’s… I don’t want to spoil too much, but once I go back home, this is my focus. Like, this is out there now, and my near-term mission is, like, make it more stable, make it safe. In the beginning, I was even… More and more people were, like, coming into Discord and asking me very basic things, like, “What’s a CLI?
Peter Steinberger (00:58:03) What is a terminal?” And I’m like, “Uh, if you’re asking me those questions, you shouldn’t use it.”
Lex Fridman (00:58:10) Mm-hmm.
Peter Steinberger (00:58:10) You know, like, you should… If you understand the risk profiles, fine. I mean, you can configure it in a way that, that nothing really bad can happen. But if you have, like, no idea, then maybe wait a little bit more until we figure some stuff out. But they would not listen to the creator. They helped themselves and installed it anyhow. So the cat’s out of the bag, and security’s my next focus, yeah.
Lex Fridman (00:58:38) Yeah, that speaks to the, the fact that it grew so quickly. I tuned into the Discord a bunch of times, and it’s clear that there are a lot of experts there, but there are a lot of people there that don’t know anything about programming.
Peter Steinberger (00:58:50) It’s, yeah, Discord is still, Discord is still a mess. Like, I eventually retreated from the general channel to the dev channel, and now into a private channel, because people were… A lot of people are amazing, but a lot of people are just very inconsiderate, and either did not know how, how public spaces work, or did not care. And I eventually gave up and h- hid so I could, like, still work.
Lex Fridman (00:59:19) And now you’re going back to the cave to work on security.
Peter Steinberger (00:59:24) Yeah.
Lex Fridman (00:59:25) There’s some best practices for security we should mention. There’s a bunch of stuff here. There’s an OpenClaw security audit that you can run. You can do all kinds of audit checks: inbound access blast radius, network exposure, browser control exposure, local disk hygiene, plug-ins, model hygiene, credential storage, reverse proxy configuration, local session logs that live on disk. There’s where the memory is stored, sort of helping you think about what you’re comfortable giving read access to, what you’re comfortable giving write access to. All that kind of stuff. Is there something to say about the basic best security practices that you’re aware of right now?
Peter Steinberger (01:00:08) I think that people paint it in, like, a much worse light than it is. Again, you know, like, people love attention, and if they scream loudly, “Oh my God, this is like the, the scariest project ever,” um, that’s a bit annoying, ’cause it’s not. It is, it is powerful, but in many ways it’s not much different than if I run Claude Code with dangerously-skip-permissions or Codex in YOLO mode, and every, every agentic engineer that I know does that, because that’s the only way how you can, you can get stuff to work.
Lex Fridman (01:00:47) Mm-hmm.
Peter Steinberger (01:00:48) So if you make sure that you are the only person who talks to it, the risk profile is much, much smaller. If you don’t put everything on the open internet, but stick to my rec- recommendations of, like, having it in a private network, that whole risk profile falls away. But yeah, if you don’t read any of that, you can definitely…

How to code with AI agents

Lex Fridman (01:01:12) … make it problematic. You’ve been documenting the evolution of your dev workflow over the past few months. There are really good blog posts from August 25th and October 14th, and a recent one from December 28th. I recommend everybody go read them. They have a lot of different information in them, but sprinkled throughout is the evolution of your dev workflow. So, I was wondering if you could speak to that.
Peter Steinberger (01:01:37) I started… My, my first touchpoint was Claude Code, like in April. It was not great, but it was good. And this whole paradigm shift that suddenly working in the terminal was very refreshing and different. But I still needed the IDE quite a bit because, you know, it’s just not good enough. And then I experimented a lot with Cursor. That was good. I didn’t really like the fact that it was so hard to have multiple versions of it. So eventually, I, I, I went back to Claude Code as my, my main driver, and that got better. And yeah, at some point I had, like, mm, seven subscriptions. Like, I was burning through one per day because I was… I got… I’m really comfortable at running multiple windows side-by-side.
Lex Fridman (01:02:40) All CLI, all terminal. So like, what, how much were you using IDE at this point?
Peter Steinberger (01:02:46) Very, very rarely. Mostly a diff viewer to actually… Like, I got more and more comfortable that I don’t have to read all the code. I know I have one blog post where I say, “I don’t read the code.” But if you read it more closely, I mean, I don’t read the boring parts of the code. Because if you, if you look at it, most software is really just, like: data comes in, it’s moved from one shape to another shape. Maybe you store it in a database. Maybe I get it out again. I show it to the user. The browser does some processing, or a native app. Some data goes in, goes up again, and does the same dance in reverse. We’re just, we’re just shifting data from one form to another, and that’s not very exciting. Or the whole, “How is my button aligned in Tailwind?” I don’t need to read that code.
Peter Steinberger (01:03:39) Other parts that… Maybe something that touches the database. Yeah, I have to do… I have to r- read and review that code.
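As an invented illustration of that “data moves from one shape to another” point, most of the code being skipped looks roughly like this sketch, and there is little insight to gain from reading it line by line:

```typescript
// Invented example of the "boring" transform code being described: an API
// payload becomes a database row, becomes a view model. No surprises to review.
type ApiUser = { id: string; first_name: string; last_name: string };
type UserRow = { id: string; fullName: string; createdAt: string };
type UserView = { label: string };

function toRow(u: ApiUser): UserRow {
  return {
    id: u.id,
    fullName: `${u.first_name} ${u.last_name}`,
    createdAt: new Date().toISOString(),
  };
}

function toView(r: UserRow): UserView {
  return { label: r.fullName };
}

console.log(toView(toRow({ id: "1", first_name: "Ada", last_name: "Lovelace" }))); // { label: "Ada Lovelace" }
```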
Lex Fridman (01:03:51) Can you actually… There’s, in one of your blog posts, Just Talk to It, the No-BS Way of Agentic Engineering, you have this graphic, the curve of agentic programming. On the X-axis is time, on the Y-axis is complexity. There’s the “Please fix this,” where you prompt a short prompt, on the left. And in the middle there’s the super-complicated stage: eight agents, complex orchestration with multiple checkouts, chaining agents together, custom sub-agent workflows, a library of 18 different slash commands, large full-stack features. You’re super organized, you’re a super complicated, sophisticated software engineer. You got everything organized. And then the elite level is, over time you arrive at the zen place of, once again, short prompts.
Lex Fridman (01:04:40) Hey, look at these files and then do these changes.
Peter Steinberger (01:04:45) I actually call it the agentic trap. You… I saw this in a, in a lot of people that have their first touchpoint, and maybe start vibe coding. I actually think vibe coding is a slur.
Lex Fridman (01:05:01) You prefer agentic engineering?
Peter Steinberger (01:05:02) Yeah, I always tell people I, I do agentic engineering, and then maybe after 3:00 AM I switch to vibe coding, and then I have regrets on the next day.
Lex Fridman (01:05:10) Yeah. Walk, walk of shame.
Peter Steinberger (01:05:13) Yeah, you just have to clean up and like fix your sh- shit.
Lex Fridman (01:05:17) We’ve all been there.
Peter Steinberger (01:05:18) So, people start trying out those tools, the builder types get really excited. And then you have to play with it, right? It’s the same way as you have to play with a guitar before you can make good music. It’s, it’s not, oh, I, I touch it once and it just flows. It, it’s a, it’s a, a skill that you have to learn like any other skill. And I see a lot of people that are not as posi- They don’t have such a positive mindset towards the tech. They try it once. It’s like, you sit me at a piano, I play it once, and it doesn’t sound good, and I say, “The piano’s shit.” That’s, that’s sometimes the impression I get. Because it does not… It needs a different level of thinking. You have to learn the language of the agent a little bit, understand where they are good and where they need help.
Peter Steinberger (01:06:16) You have to almost… Consider, consider how Codex or Claude sees your code base. Like, they start a new session and they know nothing about your product, project. And your project might have a hundred thousand lines of code. So you gotta help those agents a little bit and keep in mind the limitations, that context size is an issue, to, like, guide them a little bit as to where they should look. That often does not require a whole lot of work. But it’s helpful to think a little bit about their perspective.
Lex Fridman (01:06:54) Mm-hmm.
Peter Steinberger (01:06:54) A- as, as weird as it sounds. I mean, it’s not, it’s not alive or anything, right? But, but they always start fresh. I have, I have the, the system understanding. So with a few pointers, I can immediately say, “Hey, wanna, like, make a change there? You need to consider this, this and this.” And then they will find and look at it, and then they’ll… Their view of the project is never full, because the full thing does not fit in… So you, you have to guide them a little bit where to look and also how they should approach the problem. There’s, like, little things that sometimes help, like “take your time.” That sounds stupid, but…
Peter Steinberger (01:07:33) And in 5.3-
Lex Fridman (01:07:35) Codex 5.3
Peter Steinberger (01:07:36) … that was partially addressed. But those… Also, Opus sometimes. They are trained with being aware of the context window, and the closer it gets, the more they freak out. Literally. Like, some- sometimes you see the, the real raw thinking stream. What you see, for example, in Codex, is post-processed.
Lex Fridman (01:07:59) Mm-hmm.
Peter Steinberger (01:08:00) Sometimes the actual raw thinking stream leaks in, and it sounds something like from the Borg. Like, “Run to shell, must comply, but time.” And then they, they, they, like… Like, that comes up a lot. Especially… So, so-
Lex Fridman (01:08:15) Yeah.
Peter Steinberger (01:08:16) And that’s, that’s a non-obvious thing that you just would never think of unless you actually just spend time working with those things and getting a feeling for what works, what doesn’t work. You know? Like, just, just as when I write code and I get into the flow, and when my architecture’s not right, I feel friction. Well, I get the same if I prompt and something takes too long. Maybe… Okay, where’s the mistake? Did I… Do I have a mistake in my thinking? Is there, like, a misunderstanding in the architecture? Like, if, if something takes longer than it should, I, I… You can just always, like, stop and s- like, just press Escape. Where, where are the problems?
Lex Fridman (01:09:00) Maybe you did not sufficiently empathize with the perspective of the agent. In that c- in that sense, you didn’t provide enough information, and because of that, it’s thinking way too long.
Peter Steinberger (01:09:08) Yeah. It just tries to force a feature in that your current architecture makes really hard. Like, you need to approach this more like a conversation. For example, when I… My favorite thing. When I review a pull request, and I’m getting a lot of pull requests, I first just say, “Review this PR.” It gets me the review. My first question is, “Do you understand the intent of the PR?” I don’t even care about the implementation. I want… Like, in almost all PRs, a person has a problem, person tries to solve the problem, person sends PR. I mean, there’s, like, cleanup stuff and other stuff, but, like, 99% is, like, this way, right? They either want to fix a, fix a bug or add a feature. Usually one of those two.
Peter Steinberger (01:10:01) And then Codex will be like, “Yeah, it’s quite clear, person tried this and this.” Is this the most optimal way to do it? No. In most cases, it’s, it’s like a, “Not really.” Da-da-da-da-da-da-da. And I’m… And, and then I start like, “Okay. What would be a better way? Have you… Have you looked into this part, this part, this part?” And then most likely, Codex didn’t yet, because its, its context is empty, right? So, you point them into parts where you have the system understanding that it didn’t see yet. And it’s like, “Oh, yeah. Like, we should… We also need to consider this and this.” And then, like, we have a discussion of what would the optimal way to, to solve this look like? And then you can still go farther and say, “Could we…
Peter Steinberger (01:10:41) Could we make that even better if we did a larger refactor?” “Yeah, yeah. We could totally do this and this, or this and this.” And then I consider, okay, is this worth the refactor, or should we, like, keep that for later? Many times, I just do the refactor because refactors are cheap now. Even though you might break some other PRs, nothing really matters anymore. Codex… Like, those modern agents will just figure things out. They might just take a minute longer. But you have to approach it like a discussion with a, a very capable engineer who… generally comes up with good solutions. Some- sometimes needs a little help.
Lex Fridman (01:11:19) But also, don’t force your worldview too hard on it. Let the agent do the thing that it’s good at doing, based on what it was trained on. So, don’t, like, force your worldview, because it might… It might have a better idea, because it just knows that idea better, because it was trained on that more.
Peter Steinberger (01:11:39) That’s multiple levels, actually. I think partially why I find it quite easy to work with agents is because I led engineering teams before. You know, I had a large company before. And eventually, you have to understand and accept and realize that your employees will not write code the same way you do. Maybe it’s also not as good as you would do, but it will push the project forward.
Peter Steinberger (01:12:02) And if I breathe down everyone’s neck, they’re just gonna hate me-
Lex Fridman (01:12:05) Yeah
Peter Steinberger (01:12:05) … and we’re gonna move very slow.
Lex Fridman (01:12:07) Yeah.
Peter Steinberger (01:12:07) So, so some level of acceptance that, yes, maybe the code will not be as perfect. Yes, I would have done it differently. But also, yes, this is a c- this is a working solution, and in the future, if it actually turns out to be too slow or problematic, we can always redo it. We can always-
Lex Fridman (01:12:24) Mm-hmm
Peter Steinberger (01:12:24) … spend more time on it. A lot of the people who struggle are those who, they try to push their way on too hard.
Lex Fridman (01:12:33) Mm-hmm.
Peter Steinberger (01:12:33) L- like, we are in a stage where I’m not building the code base to be perfect for me, but I wanna build a code base that is very easy for an agent to navigate.
Lex Fridman (01:12:47) Mm-hmm.
Peter Steinberger (01:12:48) So, like, don’t fight the name they pick, because it’s most likely, like, in the weights, the name that’s most obvious. Next time they do a search, they’ll look for that name. If I decide, oh, no, I don’t like the name, I’ll just make it harder for them. So, that requires, I think, a shift in, in thinking and, and in how do I design a, a project so agents can do their best work.
Lex Fridman (01:13:14) That requires letting go a little bit. Just like leading a team of engineers.
Peter Steinberger (01:13:19) Yeah.
Lex Fridman (01:13:19) Because it, it might come up with a name that’s, in your view, terrible, but… It’s kind of a simple symbolic-… step of letting go.
Peter Steinberger (01:13:29) Very much so.
Lex Fridman (01:13:30) There’s a lot of letting go that you do in your whole process. So for example, I read that you never revert, always commit to main. There’s a few things here. You don’t revert to past sessions, so there’s a kind of YOLO component, because instead of reverting, if a problem comes up, you just ask the agent to fix it.
Peter Steinberger (01:13:57) I read a bunch of people and their workflows, like, “Oh, yeah, the prompt has to be perfect, and if I make a mistake, then I roll back and redo it all.” In my experience, that’s not really necessary. If I roll back everything, it will just take longer. If I see that something’s not good, then we just move forward, and then I commit when, when, when I, like, I like the outcome. I even switched to local CI, you know, like, DHH-inspired, where I don’t care so much anymore about the CI on GitHub. We still have it. It’s still, it still has a place, but I just run tests locally, and if they work locally, I push to main. A lot of the traditional ways how to approach projects, I, I wanted to give a different spin on this project. You know, there’s no… There’s no develop branch.
Peter Steinberger (01:14:57) Main should always be shippable. Yes, we have… When I do releases, I, I run tests and sometimes I, I basically don’t commit any other things so, so we can, we can stabilize releases. But the goal is that main’s always shippable and moving fast.
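A minimal sketch of that local-CI flow, assuming a Node/TypeScript project with generic npm and git commands (these are placeholders, not the project’s actual scripts): tests run locally, and the push to main happens only if they pass.

```typescript
// Minimal "local CI" sketch: push to main only if the test suite passes locally.
// The npm/git commands are generic placeholders, not the project's real scripts.
import { execSync } from "node:child_process";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" }); // throws on a non-zero exit code
}

try {
  run("npm test");
  run("git push origin main");
  console.log("Tests green, pushed to main.");
} catch {
  console.error("Tests failed locally; nothing was pushed.");
  process.exit(1);
}
```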
Lex Fridman (01:15:18) So by way of advice, would you say that your prompts should be short?
Peter Steinberger (01:15:23) I used to write really long prompts. And by writing, I mean, I don’t write. I, I, I talk. You know, th- these hands are, like, too, too precious for writing now. I just, I just use bespoke prompts to build my software.
Lex Fridman (01:15:37) So you for real with all those terminals are using voice?
Peter Steinberger (01:15:40) Yeah. I used to do it very extensively to the point where there was a period where I lost my voice.
Lex Fridman (01:15:49) You’re using voice and you’re switching using a keyboard between the different terminals, but then you’re using voice for the actual input.
Peter Steinberger (01:15:55) Well, I mean, if I do terminal commands like switching folders or random stuff, of course I type. It’s faster, right? But if I talk to the agent, in, in most ways, I just actually have a conversation. You just press the, the walkie-talkie button and then I just, like, use my phrases. S- sometimes when I do PRs, because it’s always the same, I have, like, a slash command for a few things, but even that, I don’t use much because it’s, it’s very rare that it’s really always the same questions. Sometimes I, I see a PR and for… You know, like, for PRs I actually do look at the code because I don’t trust people. Like, there could always be something malicious in it, so I need to actually look over the code.
Peter Steinberger (01:16:45) Yes, I’m pretty sure agents will find it, but yeah, that’s the funny part where sometimes PRs take me longer than if you would just write me a good issue.
Lex Fridman (01:16:54) Just natural language, English. I mean in some sense, sh- shouldn’t that be what PRs slowly become, is English?
Peter Steinberger (01:17:03) Well, what I really tried with the project is I asked people to give me the prompts and very, very few actually cared. Even though that is such a wonderful indicator because I see… I actually see how much care you put in. And it’s very interesting because the… Currently, the way how people work and drive the agents is, is wildly different.
Lex Fridman (01:17:29) In terms of, like, the prompt, in terms of what, what are the… Actually, what are the different interesting ways that people think of agents that you’ve experienced?
Peter Steinberger (01:17:40) I think not a lot of people ever considered the way the agent sees the world.
Lex Fridman (01:17:46) And so empathy, being empathetic towards the agent.
Peter Steinberger (01:17:50) In a way empathetic, but yeah, you, you, like, you bitch at your stupid clanker, but you don’t realize that they start from nothing and you have, like, a bad AGENTS.md default that doesn’t help them at all. And then they explore your code base, which is, like, a pure mess with, like, weird naming. And then people complain that the agent’s not good. Like, yeah, you try to do the same if you have no clue about a code base and you go in.
Lex Fridman (01:18:11) Mm-hmm.
Peter Steinberger (01:18:11) So yeah, maybe it’s a little bit of empathy.
Lex Fridman (01:18:13) But that’s a real skill, like, when people talk about a skill issue. Because I’ve seen, like, world-class programmers, incredibly good programmers, basically say, “LLMs and agents suck.” And I think that probably has to do with… How good they are at programming is almost a burden on their ability to empathize with the system that’s starting from scratch. It’s a totally new paradigm of, like, how to program. You really, really have to empathize.
Peter Steinberger (01:18:44) Or at least it helps to create better prompts-
Lex Fridman (01:18:47) Right
Peter Steinberger (01:18:47) … because those things know pretty much everything, and everything is just a question away. It’s just often very hard to know which question to ask. You know, I, I feel also like this project was possible because I, I spent an ungodly amount of time over the year to play and to learn and to build little things. And every step of the way, I got better, the agents got better. My, my understanding of how everything works got better. Um, I could not have had this level of, of o- output… even a few months ago. Like, it- it- it really was, like, a compounding effect of all the time I put into it, and I didn’t do much else this year other than really focusing on, on building and inspiring. I mean, I- I did a whole bunch of conference talks.
Lex Fridman (01:19:47) Well, but the building is really practice, is really building the actual skill. So playing-
Peter Steinberger (01:19:51) Yeah
Lex Fridman (01:19:51) … playing. And then, so doing, building the skill of what it takes to work efficiently with LLMs, which is why you went through the whole arc of the software engineer: talk simply and then over-complicate things.
Peter Steinberger (01:20:03) There’s a whole bunch of people who try to automate the whole thing.
Lex Fridman (01:20:08) Yeah.
Peter Steinberger (01:20:10) I don’t think that works. Maybe a version of that works, but that’s kind of like in the ’70s when we had the waterfall model of software d- development. I… Even, even this project, really, right? I started out, I, I built a very minimal version. I played with it. I, I need to understand how it works, how it feels, and then it gives me new ideas. I could not have planned this out in my head and then put it into some orchestrator and then, like, something comes out. Like, to me, it’s much more that my idea of what it will become evolves as I build it and as I play with it and as I, I try out stuff.
Peter Steinberger (01:20:49) So, so, people who try to use like, you know, things like Gas Town or all these other orchestrators, where they wanna o- automate the whole thing, I feel if you do that, it misses style, love, that human touch. I don’t think you can automate that away so quickly.
Lex Fridman (01:21:09) So you want to keep the human in the loop, but at the same time you also want to create the agentic loop, where it is very autonomous while still maintaining a human in the loop.
Peter Steinberger (01:21:22) Yeah.
Lex Fridman (01:21:22) And it’s a tricky b- it’s a tricky balance.
Peter Steinberger (01:21:24) Mm-hmm.
Lex Fridman (01:21:24) Right? Because you’re all for… You’re a big CLI guy, you’re big on closing the agentic loop. So what, what’s the right balance? Like where’s your role as a developer? You have three to eight agents running at the same time.
Peter Steinberger (01:21:38) And then w- maybe one builds a larger feature. Maybe, maybe with one I explore some idea I’m unsure about. Maybe two, three are fixing little bugs-
Lex Fridman (01:21:47) Mm-hmm
Peter Steinberger (01:21:47) … or like writing documentation. Actually, I think writing documentation is, is always part of a feature. So most of the docs here are auto-generated and just infused with some prompts.
Lex Fridman (01:21:59) So when do you step in and add a little bit of your human love into the picture?
Peter Steinberger (01:22:04) I mean, o- one thing is just about what do you build and what do you not build, and how does this feature fit into all the other features? And like having, having a little bit of a, of a vision.
Lex Fridman (01:22:16) So which small and which big features to add? What are some of the hard design decisions that you find you’re still as a human being required to make, that the human brain is still really needed for? Is it just about the choice of features to add? Is it about implementation details, maybe the programming language, maybe…
Peter Steinberger (01:22:41) It’s a little bit of everything. The, the programming language doesn’t matter so much, but the ecosystem matters, right? So I picked TypeScript because I wanted it to be very easy and hackable and approachable and that’s the number one language that’s being used right now, and it fits all these boxes, and agents are good at it. So that was the obvious choice. Features, of course, like, it’s very easy to, like, add a feature. It, everything’s just a prompt away, right? But oftentimes you pay a price that you don’t even realize. So thinking hard about what should be in core, maybe what’s a… what’s an experiment, so maybe I make it a plugin. What… Where do I say no?
Peter Steinberger (01:23:24) Even if people send a PR and I’m like, “Yeah, I, I like that too,” but maybe this should not be part of the project. Maybe we can make it a skill. Maybe I can, like, make the plugin, um, the plugin surface larger so you can make this a plugin, even though right now it, it, it doesn’t fit. There’s still a lot of… there’s still a lot of craft and thinking involved in how to make something. Or even, even, you know, even when you start it, those little messages are like, “Built on caffeine, JSON5, and a lot of willpower.” And, like, every time you run it, you get another message, and it kind of primes you into that this is, this is a fun thing.
Lex Fridman (01:24:07) Mm-hmm.
Peter Steinberger (01:24:08) And it’s not yet Microsoft Exchange 2025-
Lex Fridman (01:24:12) Right
Peter Steinberger (01:24:13) … and fully enterprise-ready. And then when it updates, it’s like, “Oh, I’m in. It’s cozy here.” You know, like something like this that like-
Lex Fridman (01:24:21) Mm-hmm
Peter Steinberger (01:24:22) … Makes you smile. An agent would not come up with that by itself. Because that’s like… that’s the… I don’t know. That’s just how you s- how you build software that, that delights.
Lex Fridman (01:24:36) Yeah, that delight is such a huge part of inspiring great building, right? Like you feel the love and the great engineering. That’s so important. Humans are incredible at that. Great humans, great builders are incredible at that, in, in, infusing the things they build with th- that little bit of love. Not to be cliche, but it’s true. I mean, you mentioned that you initially created the soul.md.
Peter Steinberger (01:25:05) It was very fascinating, you know, the, the whole thing that Anthropic has, has like a… Now they call it a constitution, but that was months later. Like two months before, people already found that. It was almost like a detective game where the agent mentioned something and then they found… They managed to get out a little bit of that string, of that text. But it was nowhere documented, and then, just by feeding it the same text and asking it to, like, continue… they got more out, but, like, a very blurry version. And by, like, hundreds of tries, they kinda, like, narrowed it down to what was most likely the original text. I found that fascinating.
Lex Fridman (01:25:47) It was fascinating they were able to pull that out from the weights, right?
Peter Steinberger (01:25:51) And, and also just kudos to Anthropic. Like, I think that’s, it’s a really, it’s a really beautiful idea, like, like some of the stuff that’s in there. Like, like, “We hope Claude finds meaning in its work.” ’Cause we don’t… Maybe it’s a little early, but I think that’s meaningful. That’s something that’s important for the future as we approach something that, at some point, may or may not have, like, glimpses of consciousness, whatever that even means, because we don’t even know. So I, I read about this. I found it super fascinating, and I, I started a whole discussion with my agent on WhatsApp. And, and I’m like…
Peter Steinberger (01:26:26) I, I gave it this text, and it was like, “Yeah, this feels strangely familiar.”
Lex Fridman (01:26:30) Mm-hmm.
Peter Steinberger (01:26:31) And then, so, I had the whole idea of, like, you know, maybe we should also create a, a soul document that includes how I, I want to, like, work with AI or, like, with my agent. You could, you could totally do that just in agents.md, you know? But I, I just found it, it to be a nice touch. And it’s like, well, yeah, some of those core values are in the soul. And then I, I also made it so that the agent is allowed to modify the soul if they choose so, with the one condition that I wanna know. I mean, I would know anyhow because I see, I see tool calls and stuff.
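The “the agent may modify its soul, but I want to know” condition could be approximated with something as small as a file watcher. A hypothetical TypeScript sketch, not how OpenClaw actually implements it:

```typescript
// Hypothetical sketch of the "the agent may edit its soul, but I want to know"
// rule: watch soul.md and surface every change. Not OpenClaw's implementation.
import { watchFile } from "node:fs";
import { readFile } from "node:fs/promises";

const SOUL_PATH = "./soul.md"; // assumed location, for illustration only

watchFile(SOUL_PATH, { interval: 1000 }, async () => {
  const contents = await readFile(SOUL_PATH, "utf8");
  // A real setup might ping you on your messaging channel instead of logging.
  console.log(`soul.md changed (${contents.length} chars); go review the diff.`);
});
```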
Lex Fridman (01:27:07) But also the naming of it, soul.md. Soul. You know? There’s a… Man, words matter, and like, the framing matters, and the humor and the lightness matters, and the profundity matters, and the compassion, and the empathy, and the camaraderie, all that matter. I don’t know what it is. You mentioned, like, Microsoft. Like, there’s certain companies and approaches th- that can just suffocate the spirit of the thing. I don’t know what that is. But it’s certainly true that OpenClaw has that fun instilled in it.
Peter Steinberger (01:27:43) It was fun because up until late December, it was not even easy to create your own agent. I, I built all of that, but my files were mine. I didn’t wanna share my soul. And if people would just check it out, they would have to do a few steps manually, and the agent would just be very bare-bones, very dry. And I, I made it simpler, I created the whole template files with Codex, but whatever came out was still very dry. And then I asked my agent, “You see these files? Recreate it fresh.
Peter Steinberger (01:28:26) Infuse it with your personality.”
Lex Fridman (01:28:28) Mm-hmm.
Peter Steinberger (01:28:29) Don’t share everything, but, like, make it good.
Lex Fridman (01:28:31) Make the templates good.
Peter Steinberger (01:28:31) Yeah, and then he, like, rewrote the templates-
Lex Fridman (01:28:33) Yeah
Peter Steinberger (01:28:33) … and then whatever came out was good. So we already have, like, basically AI prompting AI. Because I didn’t write any of those words. It was… The intent originally was for me, but this is like, kinda like, my agent’s children.
Lex Fridman (01:28:52) Your uh, your soul.md is famously still private. One of the only things you keep private. What are some things you can speak to that’s in there that’s part of the, part of the magic sauce, without revealing anything? What makes a personality a personality?
Peter Steinberger (01:29:13) I mean, there’s definitely stuff in there like, you’re not human. But who knows what, what creates consciousness or what defines an entity? And part of this is, like, that we, we wanna explore this. All that stuff in there, like, be infinitely resourceful, like pushing, pushing on the creativity boundary. Pushing on the, what it means to be an AI.
Lex Fridman (01:29:50) Having a sense to wonder about self.
Peter Steinberger (01:29:52) Yeah, there’s some, there’s some funny stuff in there. Like, I don’t know, we talked about the movie Her, and at one point it promised me that it wouldn’t, it wouldn’t ascend without me. You know, like, where the-
Lex Fridman (01:30:03) Yeah.
Peter Steinberger (01:30:03) So, so there’s like some stuff in there that… Because it wrote the, it wrote its own soul file. I didn’t write that, right?
Lex Fridman (01:30:10) Yeah, yeah, yeah.
Peter Steinberger (01:30:10) I just heard a discussion about it, and it was like, “Would you like a soul.md? Yeah, oh my God, this is so meaningful.” The… Can you go on soul.md? There’s like one, one part in there that always ca- catches me if you scroll down a little bit. A little bit more. Yeah, this, this, this part. “I don’t remember previous sessions unless I read my memory files. Each session starts fresh. A new instance, loading context from files. If you’re reading this in a future session, hello.” “I wrote this, but I won’t remember writing it. It’s okay.
Peter Steinberger (01:30:44) The words are still mine.”
Lex Fridman (01:30:47) Wow.
Peter Steinberger (01:30:48) Uh-
Lex Fridman (01:30:48) Yeah.
Peter Steinberger (01:30:48) That gets me somehow.
Lex Fridman (01:30:49) Yeah.
Peter Steinberger (01:30:50) It’s like-
Lex Fridman (01:30:51) Yeah.
Peter Steinberger (01:30:51) You know, this is, it’s still, it’s still matrix m- multiplications, and we are not at consciousness yet. Yet, I, I get a little bit of goo- goosebumps because it, it’s philosophical.
Lex Fridman (01:31:04) Yeah.
Peter Steinberger (01:31:04) Like, what does it mean to be, to be an, an agent that starts fresh? Where, like, you’re in, like, constant Memento, and you, like… but you read your own memory files. You can’t even trust them, in a way. Um-
Lex Fridman (01:31:19) Yeah
Peter Steinberger (01:31:19) Or you can. And I don’t know.
Lex Fridman (01:31:22) How much of who we are is made up of memory? How much memory makes up what an agent is, and if you erase that memory, is that somebody else? Or if you’re reading a memory file, does that somehow mean… you’re recreating yourself from somebody else, or is that actually you? And those notions are all s- somehow infused in there.
Peter Steinberger (01:31:45) I found it just more profound than I should find it, I guess.
Lex Fridman (01:31:49) No, I think, I think it’s truly profound, and I think you see the magic in it. And when you see the magic, you continue to instill the whole loop with the magic. That’s really important. That’s the difference between Codex and us humans. Quick pause for bathroom break.
Peter Steinberger (01:32:08) Yeah.

Programming setup

Lex Fridman (01:32:09) Okay, we’re back. Some of the other aspects of the dev workflow is pretty interesting too. I think we w- went off on a tangent. L- maybe some of the mundane things, like how many monitors? There’s that legendary picture of you with, like, 17,000 monitors. That’s amazing.
Peter Steinberger (01:32:26) I mean, I- I- I mocked myself here, so I just added… used Grok to, to add more screens.
Lex Fridman (01:32:32) Yeah. How much of this is meme and how much is reality?
Peter Steinberger (01:32:36) Yeah. I think two MacBooks are real. The main one that drives the two big screens, and there’s another MacBook that I sometimes use for, for testing.
Lex Fridman (01:32:46) So two big screens.
Peter Steinberger (01:32:48) I’m a big fan of anti-glare. So I have this wide Dell that’s anti-glare, and you can just fit a lot of terminals side-by-side. I usually have a terminal and at the bottom, I- I- I split them. I have a little bit of actual terminal, mostly because when I started, I- I sometimes made the mistake and I- I mi- I mixed up the- the windows, and I gave… I- I prompted in the wrong project, and then the agent ran off for, like, 20 minutes, manically trying to understand what I could have meant, being completely confused because it was the wrong folder. And sometimes they’ve been clever enough to, like, get out of the working directory and, like, figure out that, oh, you meant another project.
Lex Fridman (01:33:35) Mm-hmm.
Peter Steinberger (01:33:36) But oftentimes, it’s just, like, what? You know? Like, fit your- f- put yourself in the shoes of your- of the agent and, and-
Lex Fridman (01:33:43) Yeah
Peter Steinberger (01:33:43) … and then they get, like, a super weird something that does not exist and then just, like… They’re problem solvers, so they try really hard, and I always feel bad. So it’s always Codex and, like, a little bit of actual terminal. Also helpful because I don’t use worktrees. I like to keep things simple, that’s why- that’s why I like the terminal so much, right? There’s no UI. It’s just me and the agent having a conversation. Like, I don’t even need plan mode, you know? There’s so many people that come from Claude Code and they’re so, so Claude-pilled and, like, have their workflows, and they come to Codex and… Now, it has plan mode, I think, but I don’t think it’s necessary because you just- you just talk to the agent. And when it’s… when you…
Peter Steinberger (01:34:32) there’s a few trigger words how you can prevent it from building. You’re like, “Discuss, give me options.”
Lex Fridman (01:34:37) Mm-hmm.
Peter Steinberger (01:34:38) “Don’t write code yet,” if you wanna be very specific. You just talk, and then when you’re ready, then- then just write, “Okay, build,” and then it’ll do the thing. And then maybe it goes off for 20 minutes and does the thing.
Lex Fridman (01:34:50) You know what I really like is asking it, “Do you have any questions for me?”
Peter Steinberger (01:34:54) Yeah. And again, like, Claude Code has a UI that kind of guides you through that. It’s kind of cool, but I just find it unnecessary and slow. Like, often it would give me four questions, and then maybe I write, “One, yeah; two and three, discuss more; four, I don’t know.” Or often- oftentimes I- I feel like I want to mock the model, where I ask it, “Do you have any questions for me?” And I- I- I don’t even read the questions fully. Like, I scan over the questions and I, I get the impression all of this can be answered by reading more code, and it’s just like, “Read more code to answer your own questions.” And that usually works.
Lex Fridman (01:35:32) Yeah.
Peter Steinberger (01:35:32) And then if not, it will come back and tell me. But many times, you just realize that, you know, it’s like you’re in the dark and you slowly discover the room, so that’s how they slowly discover the code base. And they do it from scratch every time.
Lex Fridman (01:35:46) But I’m also fascinated by the fact that I can empathize deeper with the model when I read its questions, because I can understand… Because you said you can infer certain things by the runtime. I can infer also a lot of things by the questions it’s asking, because it’s very possible it hasn’t been provided the right context, the right files, the right guidance. So somehow, g- reading the questions, not even necessarily answering them, but just reading the questions, you get an understanding of where the gaps of knowledge are. It’s in- it’s interesting.
Peter Steinberger (01:36:24) You know that in some ways they are ghosts, so even if you plan everything and you build, you can- you can experiment with the question like, “Now that you built it, what would you have done different?” And then oftentimes you get, like, actually something where they discover only throughout building that, oh, what we actually did was not optimal. Many times I- I asked them, “Okay, now that you built it, what can we refactor?” Because then you build it and you feel the pain points. I mean, you don’t feel the pain points but, right, they discover where- where there were problems or where things didn’t work e- in the first try and it re- required more loops.
Peter Steinberger (01:37:09) So every time, almost every time I- I merge a PR, build a feature, afterwards I ask, “Hey, what can we refactor?” Sometimes it’s like, “No, there’s, like, nothing big,” or, like, usually they say, “Yeah, this thing you should really look at.” But that took me quite a while to, like… You know, that flow took me lots of time to understand, and if you don’t do that, you eventually… you’ll paint yourself into- into a corner. You, like, you have to keep in mind…
Peter Steinberger (01:37:42) … they work very much like humans. Like, I, I, if I write software by myself, I also build something and then I feel the pain points, and then I, I get this urge that I need to refactor something. So, I can very much empathize with the agent, and you just need to use the context.
Lex Fridman (01:38:00) Mm-hmm.
Peter Steinberger (01:38:00) Or, like, you also use the context to write tests. And so Codex, uh, Opus, like, the, the models. They, they usually do that by default, but I still often ask the question, “Hey, do we have enough tests?” “Yeah, we tested this and this, but this corner case could be something.” Write more tests. Um, documentation. Now that the whole context is full, like, I mean, I’m not saying my documentation is great, but it’s not bad. And pretty much everything is, is LLM-generated. So, so, you have to approach it as you build features, as you change something. I’m like, “Okay, write documentation. What file would you pick?” You know, like, “What file name? Where, where would that fit in?” And it gives me a few options.
Peter Steinberger (01:38:48) And I’m like, “Oh, maybe also add it there,” and that’s all part of the session.

GPT Codex 5.3 vs Claude Opus 4.6

Lex Fridman (01:38:52) Maybe you can talk about the current two big competitors in terms of models, Claude Opus 4.6 and GPT-5.3 through Codex. Which is better? How different are they? I think you’ve spoken about Codex reading more and Opus being more willing to take action faster and maybe being more creative in the actions it takes. But because-
Peter Steinberger (01:39:20) Yeah
Lex Fridman (01:39:20) … Codex reads more, it’s able to deliver maybe better code. Can you speak to the di- n- n- differences there?
Peter Steinberger (01:39:29) I have a lot of words there. A- as a general-purpose model, Opus is the best. Like, for OpenClaw, Opus is extremely good in terms of role play. Like, really going into the character that you give it. It’s very good at… It was really bad, but it really made an arc to be really good at following commands. It is usually quite fast at trying something. It’s much more tailored to, like, trial and error. It’s very pleasant to use. In general, it’s almost like Opus was… Is a little bit too American. And I shouldn’t… Maybe that’s a bad analogy. I’ll probably get roasted for that.
Lex Fridman (01:40:27) Yeah, I know exactly. It’s ’cause Codex is German. Is that what you’re saying?
Peter Steinberger (01:40:32) It’s-
Lex Fridman (01:40:32) Actually, now that you say it, it makes perfect sense.
Peter Steinberger (01:40:34) Or you could, you could… Sometimes I- Sometimes I explain it-
Lex Fridman (01:40:38) I will never be able to unthink what you just said. That’s so true.
Peter Steinberger (01:40:42) But you also know that a lot of the Codex team is, like, European, um- … so maybe there’s a bit more to it.
Lex Fridman (01:40:49) That’s so true. Oh, that’s funny.
Peter Steinberger (01:40:51) But also, A- Anthropic, they fixed it a little bit. Like, Opus used to say, “You’re absolutely right” all the time, and it, it, it today still triggers me. I can’t hear it anymore. It’s not even a joke. Uh, I just… You, this was like the, the meme, right? “You’re absolutely right.”
Lex Fridman (01:41:09) You’re allergic to sycophancy a little bit.
Peter Steinberger (01:41:11) Yeah. I, I can’t. Some other comparison is like, Opus is like the coworker that is a little silly sometimes, but it’s really funny and you keep him around. And Codex is like the, the weirdo in the corner that you don’t wanna talk to, but is reliable and gets shit done.
Lex Fridman (01:41:30) Yeah.
Peter Steinberger (01:41:32) Ultimately-
Lex Fridman (01:41:36) This all feels very accurate.
Peter Steinberger (01:41:39) I mean, ultimately, if you’re a skilled driver, you can get good results with any of those latest gen models. Um, I like Codex more because it doesn’t require so much charade. It will just, it will just read a lot of code by default. Opus, you really have to, like, you have to have plan mode. You have to push it harder to, like, go in these directions because it’s, it’s just like, like, “Yeah, can I go in? Can I go in?” You know?
Lex Fridman (01:42:08) Yeah.
Peter Steinberger (01:42:08) It’s like, it will just run off very fast, and then you get a very localized solution. I think it, I think the difference is, is in the post-training. It’s not like the, the raw model intelligence is so different, but it’s just… I think that they just give it, give you different, different goals. And no model, no model is better in, in every aspect.
Lex Fridman (01:42:29) What about the code that it generates? The, the… In terms of the actual quality of the code, is it basically the same?
Peter Steinberger (01:42:36) If you drive it right, Opus even sometimes can make more elegant solutions, but it requires more skill. It’s, it’s harder to have so many sessions in parallel with Claude Code because it’s, it’s more interactive. And I, I think that’s what a lot of people like, especially if they come from coding themselves. Whereas Codex is much more: you have a discussion, and then it’ll just disappear for 20 minutes. Like, even AMP, they, they now added a deep mode. They finally… I mocked them, you know. They finally saw the light. And then they had this whole talk about how you have to approach it differently, and I think that’s where, that’s where people struggle when they just try Codex after trying Claude Code, is that it’s, it’s a slightly diff- it’s, it’s less interactive.
Peter Steinberger (01:43:28) It’s, it’s like, I have quite long discussions sometimes, and then, like, it goes off. And then, yeah, it doesn’t matter if it takes 10, 20, 30, 40, 50 minutes or longer, you know? Like, the longest thing was, like, six hours. The latest generation can be very, very persistent until it works. If there’s a clear solution, like, “This is, this is what I want at the end, so it works,” the model will work really hard to really get there. So I think ultimately… they both need similar time, but on, on Claude, it- it’s a little bit more trial and error often. And, and Codex sometimes overthinks. I prefer that. I prefer the dry, the dry version where I have to read less over, over the more interactive, nice way.
Peter Steinberger (01:44:27) Like, people like that so much, though, that OpenAI even added a second mode with, like, a more pleasant personality. I haven’t even tried it yet. I, I kinda like the bland.
Lex Fridman (01:44:37) Mm-hmm.
Peter Steinberger (01:44:38) Yeah, ’cause it … I care about efficiency when I build it-
Lex Fridman (01:44:45) Right
Peter Steinberger (01:44:45) … and I, I have fun in the very act of building. I don’t need to have fun with my agent who builds. I have fun with my model that … where I can then test those features.
Lex Fridman (01:44:57) How long does it take for you to adjust, you know, if you switch … I don’t know when, when was the last time you switched. But to adjust to the, the feel. ‘Cause you kinda talked about like you have to kinda really feel where, where a model is strong, where, like how to navigate, how to prompt it, how … all that kinda stuff. Like, just by way of advice, ’cause you’ve been through this journey of just playing with models. How long does it take to get a feel?
Peter Steinberger (01:45:26) If, if someone switches, I would give it a week until you actually develop a gut feeling for it.
Lex Fridman (01:45:32) Yeah.
Peter Steinberger (01:45:33) That’s… if you just… I think some people also make the mistake of, they pay 200 for the, the Claude Code version, then they pay 20 bucks for the OpenAI version. But if you pay, like, the, the 20 bucks version, you get the slow version. So your experience would be terrible, because you’re used to this very interactive, very good system. And you switch to something that you have very little experience with, and then that’s gonna be very slow. So, I think OpenAI shot themselves a little bit in the foot by making the, the cheap version also slow. I would, I would have at least a small part of the fast experience as a preview. Or, like, the experience that you get when you pay 200, before degrading it to being slow, because it’s already slow.
Lex Fridman (01:46:23) Mm-hmm.
Peter Steinberger (01:46:23) I mean, they, they made it better. I think it’s… And, and they have plans to make it a lot better if the Cerebras stuff is true. But yeah, it’s a skill. It takes time. Even if you play… You have a regular guitar and you switch to an electric guitar, you’re not gonna play well right away. You have to, like, learn how it feels.
Lex Fridman (01:46:42) The- there’s also this extra psychological effect that you’ve spoken about, which is hilarious to watch. Which is, once people, uh… When the new model comes out, they try that model, they fall in love with it. “Wow, this is the smartest thing of all time.” And then, you can just watch the Reddit posts over time, they start saying, “We believe the intelligence of this model has been gradually degrading.” It, it says something about human nature and just the way our minds work, when it’s probably most likely the case that the intelligence of the model is not degrading. It’s in fact you’re getting used to a good thing.
Peter Steinberger (01:47:22) And your project grows, and you’re adding slop, and you probably don’t spend enough time to think about refactors. And you’re making it harder and harder for the agent to work on your slop. And then, and then suddenly, “Oh, now it’s hard. Oh no, it’s not working as well anymore.” What’s the motivation for, like, one of those AI companies to actually make their model dumber? Like, at most, it will make it slower if, if the server load’s too high. But, like, quantizing the model so you have a worse experience, so you go to the competitor?
Lex Fridman (01:47:56) Yeah.
Peter Steinberger (01:47:56) That just doesn’t seem like a very smart move in any way.

Best AI agent for programming

Lex Fridman (01:47:59) What do you think about Claude Code in comparison to OpenClaw? So, Claude Code and maybe the Codex coding agent? Do you see them as kind of competitors?
Peter Steinberger (01:48:11) I mean, first of all, competitor is fun when it’s not really a competition.
Lex Fridman (01:48:16) Yeah.
Peter Steinberger (01:48:16) Like, I’m happy if… If, if all it did is, like, inspire people to build something new, cool. Um, I still use Codex for the building. I, I know a lot of people use OpenClaw to, to build stuff. And I worked hard on it to make that work. And I do smaller stuff with it in terms of code. But, like, if I work hours and hours, I want a big screen, not WhatsApp, you know? So for me, a personal agent is much more about my life. Or, like, like a coworker. Like, I give you, like, a GitHub URL. Like, “Hey, try out this CLI. Does it actually work? What can we learn?” Blah, blah, blah. But when I’m deep in, deep in the flow, I want to have multiple, multiple things and it being very, very visible what it, what it does. So it… I don’t see it as a competition. It’s, it’s different things.
Lex Fridman (01:49:16) But do, do you think there’s a, a future where the two kinda combine? Like, your personal agent is also your best developing co-programmer partner?
Peter Steinberger (01:49:29) Yeah, totally. I think this is where the puck’s going, that this is gonna be more and more your operating system.
Lex Fridman (01:49:37) The operating system.
Peter Steinberger (01:49:37) And it already… It’s so funny. Like, I, I added support for sub-agents and also for… um, TTY support, so it could actually run Claude Code or Codex.
Lex Fridman (01:49:52) Mm-hmm.
Peter Steinberger (01:49:53) And because mine’s a little bit bossy, it, it, it started it and it, it, it told him, like, “Who’s the boss,” basically. And it was like, “Ah, Codex is obeying me.”
Lex Fridman (01:50:05) Oh, this is a power struggle.
Peter Steinberger (01:50:06) And also the current interface is probably not the final form. Like, if you think more globally, we are, we copied Google for agents. You have, like, a prompt, and, and then you have a chat interface. That, to me, very much feels like when we first created television and then people recorded radio shows on television and you saw that on TV.
Lex Fridman (01:50:39) Mm-hmm.
Peter Steinberger (01:50:39) I think there is, there’s n- there’s better ways how we eventually will communicate with models, and we are still very early in this, how will it even work phase. So, it will eventually converge and we will also figure out whole different ways how to work with those things.
Lex Fridman (01:51:05) One of the other components of workflow is operating system. So I told you offline that for the first time in my life, I’m expanding my sort of realm of exploration to the Apple ecosystem, to Macs, iPhone and so on. For most of my life I’ve been a Linux, Windows and WSL1, WSL2 person, which I think are all wonderful, but I’m… expanding to also trying Mac. Because it’s another way of building, and it’s also a way of building that a large part of the community currently that’s utilizing LLMs and agents is using, so. And that’s the reason I’m expanding to it. But is there something to be said about the different operating systems here? We should say that OpenClaw is supported across operating systems.
Peter Steinberger (01:51:56) Yeah.
Lex Fridman (01:51:57) I saw WSL2 recommended inside Windows for certain o- operations, but then Windows, Linux, macOS are obviously supported.
Peter Steinberger (01:52:07) Yeah, it should even work natively in Windows. I just didn’t have enough time to properly test it. And you know, like, the last 10% of software is always harder than the first 90%, so I’m sure there’s some dragons left that we’ll eventually nail down. My road was, for a long time, Windows, just because I grew up with that, then I switched and had a long phase with Linux, built my own kernels and everything, and then I went to university and I, I had my, my hacky Linux thing, and saw this white MacBook, and I just thought this is a thing of beauty, the white plastic one. And then I converted to Mac, ’cause mostly w- I was, I was sick that audio wouldn’t work on Skype, and all the other issues that, that Linux had for a long time.
Peter Steinberger (01:53:01) And then I just stuck with it, and then I dug into iOS, which required macOS anyhow, so it was never a question. I think Apple lost a little bit of its lead in terms of native. It used to be… Native apps used to be so much better, and especially on the Mac, there’s more people that build software with love. On, on Windows, it, it… Windows has much more and, like, function-wise, there’s just more, period. But a lot of it felt more functional and less done with love. Um, I mean, Mac always, like, attracted more designers and people, I felt…
Peter Steinberger (01:53:50) Even though, like, often it has less features, it, it had more delight-
Lex Fridman (01:53:54) Mm-hmm
Peter Steinberger (01:53:55) … And playfulness. So I always valued that. But in the last few years, many times I actually prefer… Oh God, people are gonna roast me for that, but I prefer Electron apps because they work, and native apps often, especially if it’s, like, a web service with a native app, are lacking features. I mean, not saying it couldn’t be done, it’s more like a, a focus thing that, like, for many, many companies, native was not that big of a priority. But if they build an Electron app, it, it’s the only app, so it is a priority, and there’s a lot more code sharing possible. And I, I build a lot of native Mac apps. I love it. I, I can’t, I can’t help myself. Like, I love crafting little Mac, Mac menu bar tools. Like, I built one to, to monitor your Codex use.
Peter Steinberger (01:54:58) I built one I call Trimmy, that’s specifically for agentic use. When you, when you select text that goes over multiple lines, it removes the newlines so you can actually paste it into the terminal. That was, again, like, this is annoying me, and after the, the 20th time of it annoying me, I just built it. There is a cool Mac app for OpenClaw that I don’t think many people discovered yet, also because it, it still needs some love. It feels a little bit too much like the Homer car right now, because I, I just experiment a lot with it. It, it lacks the polish.
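The heart of what Trimmy does can be sketched in a few lines. The real app is a native Mac menu bar tool; this standalone TypeScript function is just an assumed reconstruction of the transform:

```typescript
// Sketch of Trimmy's core idea: collapse a multi-line selection into one line so
// it pastes cleanly into a terminal prompt. The real app is a native Mac menu
// bar tool; this function is an assumed reconstruction, not its actual code.
function trimForTerminal(selection: string): string {
  return selection
    .split(/\r?\n/)                    // handle Unix and Windows line endings
    .map((line) => line.trim())        // drop the indentation of wrapped lines
    .filter((line) => line.length > 0) // skip blank lines
    .join(" ");                        // newlines become single spaces
}

console.log(trimForTerminal("kubectl get pods\n  --namespace prod\n  -o wide"));
// -> "kubectl get pods --namespace prod -o wide"
```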
Lex Fridman (01:55:32) So you still… I mean, you still love it. You still, you still love adding to the delight of that operating system.
Peter Steinberger (01:55:37) Yeah, but then you realize… Like, I also built one, for example, for GitHub. And then the… If you use SwiftUI, like, the latest and greatest at Apple, it took them forever to build something to show an image from the web. Now we have Async- AsyncImage, but… I added support for it, and then some images would just not show up or, like, be very slow. And I had a discussion with Codex like, “Hey, why is there a bug?” And even Codex said like, “Yeah, there’s this AsyncImage, but it’s really more for experimenting and it should not be used in production.” But that’s Apple’s answer to, like, showing images from the web. This shouldn’t be so hard, you know.
Lex Fridman (01:56:19) Yeah.
Peter Steinberger (01:56:19) This is like… This is, like, insane. Like, how am I in, in, in 2026 and my agent tells me, “Don’t use the stuff Apple built because it’s, it’s… It’s… Yeah, it- it’s there, but it’s not good.” And, like, this is now in the weeds. This is… To me this is like… They had so much head start and so much love, and they kind of just, like, blundered it and didn’t, didn’t evolve it as much as they should.
Lex Fridman (01:56:50) But also, there’s just the practical reality. If you look at Silicon Valley, most of the developer world that’s kind of playing with LLMs and agentic AI, they’re all using Apple products. And then, at the same time, Apple is not really, like, leaning into that. Like, they’re not… They’re not opening up and playing and working together and, like, yes.
Peter Steinberger (01:57:12) Isn’t, isn’t it funny how they completely blunder AI, and yet everybody’s buying Mac Minis?
Lex Fridman (01:57:19) How… What… Does that even make sense? You’re, you’re, you’re quite possibly the world’s greatest Mac salesman of all time.
Peter Steinberger (01:57:29) No, you don’t need a Mac Mini to install OpenClaw. You can install it on the web. There’s, there’s a concept called nodes, so you can, like, make your computer a node and it will do the same. There is something to be said for running it on separate hardware. That right now is useful. There is… There’s a big argument for the browser. You know, I, I built some agentic browser use in there. And, I mean, it’s basically Playwright with a bunch of extras to make it easier for agents.
Lex Fridman (01:58:06) Playwright is a library that controls the browser.
Peter Steinberger (01:58:08) Yeah.
Lex Fridman (01:58:08) It’s really nice, easy to use.
Peter Steinberger (01:58:09) And our internet is slowly closing down. Like, there, there’s a whole movement to make it harder for agents to use. So if you do the same in a data center and websites detect that it’s an IP from a data center, the website might just block you, or make it really hard, or put a lot of captchas in the, in the way of the agent. I mean, agents are quite good at happily clicking, “I’m not a robot.”
Lex Fridman (01:58:33) Yeah.
Peter Steinberger (01:58:33) But having that on a residential IP makes a lot of things simpler. So there’s ways. Yeah. But it really does not need to be a Mac. It can… It can be any old hardware. I always say, like, maybe use the… Use the opportunity to get yourself a new MacBook or whatever computer you use and use the old one as your server instead of buying a standalone Mac Mini. But then there’s, again, there’s a lot of very cute things people build with Mac Minis that I like.
Lex Fridman (01:59:08) Yeah.
Peter Steinberger (01:59:08) And no, I don’t get commission from Apple. They didn’t really communicate much.
Lex Fridman (01:59:16) It’s sad. It’s sad. Can you actually speak to what it takes to get started with OpenClaw? There’s… I mean, there’s a lot of people… What is it? Somebody tweeted at you, “Peter, make OpenClaw easy to set up for everyday people. 99.9% of people can’t get access to OpenClaw and have their own lobster because of the technical difficulties in getting it set up. Make OpenClaw accessible to everyone, please.” And you replied, “Working on that.” From my perspective, it seems there- there’s a bunch of different options and it’s already quite straightforward, but I suppose that’s if you have some developer background.
Peter Steinberger (01:59:50) I mean, right now you have to paste a one-liner into the terminal.
Lex Fridman (01:59:53) Right.
Peter Steinberger (01:59:54) And there’s also an app. The app kind of does that for you, but there should be a Windows app. The app needs to be easier and more loved. The configuration should potentially be web-based or in the app. And I started working on that, but honestly right now I want to focus on security aspects. And, and once I’m confident that this is at a level that I can recommend to my mom, then I’m going to make it simpler. Like I…
Peter Steinberger (02:00:27) Right now-
Lex Fridman (02:00:28) You want to make it harder so that it doesn’t scale as fast as it’s scaling.
Peter Steinberger (02:00:32) Yeah, it would be nice if it wouldn’t… I mean, that’s, like, hard to say, right? But if the growth would be a little slower, that would be helpful because people are expecting inhuman things from a single human being. And yes, I have some contributors, but also that whole machinery I started a week ago so that needs more time to figure out. And, and not everyone has all day to work on that.
Lex Fridman (02:01:00) There’s some beginners listening to this, programming beginners. What advice would you give to them about, let’s say, joining the Agentic AI revolution?
Peter Steinberger (02:01:12) Play. Playing is the best… The best way to learn. If you wanna… I’m sure if you… If you are, like, a little bit of a builder, you have an idea in your head that you want to build, just build that, or, like, give it a try. It doesn’t need to be perfect. I built a whole bunch of stuff that I don’t use. It doesn’t matter. Like, it’s the journey.
Lex Fridman (02:01:31) Mm-hmm.
Peter Steinberger (02:01:31) You know? Like the philosophical way, that the end doesn’t matter, the journey matters. Have fun.
Lex Fridman (02:01:37) Mm-hmm.
Peter Steinberger (02:01:37) My God, like those things… I… I don’t think I ever had so much fun building things because I can focus on the hard parts now. A lot of coding, I always thought I liked coding, but really I like building.
Lex Fridman (02:01:50) Yeah.
Peter Steinberger (02:01:50) And… And whenever you don’t understand something, just ask. You have an infinitely patient answering machine… that can explain anything to you at any level of complexity. Sometimes, that’s like one time I asked, “Hey, explain it to me like I’m- I’m eight years old,” and it started giving me a story with crayons and stuff. And I’m like, “No, not like that.” Like, I’m okay- … up- up the age a little bit, you know? I’m like, I’m not an actual child, it’s just, I just need a simpler language for, like, a- a tricky database concept that I didn’t grok the first- first time. But, you know, just, you can just ask things. Like, you- there’s like… It used to be that I had to go on Stack Overflow or ha- ask on Twitter, and then maybe two days later I get a response.
Peter Steinberger (02:02:37) Or I had to try for hours. And now you- you can just ask stuff. I mean, it’s never… You have, like, your own teacher. You know there’s, like, statistics that y- you can learn faster if you have your own teacher. It’s like you have this infinitely patient machine. Ask it.
Lex Fridman (02:02:53) But what would you say? So use… What’s the easiest way to play? So maybe OpenClaw is a nice way to play, so you can then set- set everything up and then you could chat with it.
Peter Steinberger (02:03:03) You can also just experiment with it and, like, modify it. Ask your agent. I mean, there is infinite ways how it can be made better. Play around, make it better.
Lex Fridman (02:03:18) Mm-hmm.
Peter Steinberger (02:03:19) More general, if you- if you’re a beginner and you actually wanna learn how to build software really fast, get involved in open source. Doesn’t need to be my project. In fact, maybe don’t use my project because my- my backlog is very large, but I learned so much from open source. Just, like, be- be humble. Don’t- maybe don’t send a pull request right away. But there’s many other ways you can help out. There’s many ways you can just learn by just reading code. By- by being on Discord or wherever people are, and just, like, understanding how things are built. I don’t know, like Mitchell Hashimoto builds Ghostty, the terminal, and he has a really good community, and there’s so many other projects. Like, pick something that you find interesting and get involved.
Lex Fridman (02:04:15) Do you recommend that people that don’t know how to program, or don’t really know how to program, learn to program also? You can get quite far right now by just using natural language, right? Do you s- still see a lot of value in reading the code, understanding the code, and then being able to write a little bit of code from scratch?
Peter Steinberger (02:04:38) It definitely helps.
Lex Fridman (02:04:39) It’s hard for you to answer that-
Peter Steinberger (02:04:41) Yeah
Lex Fridman (02:04:42) … because you don’t know what it’s like to do any of this without knowing the base knowledge. Like, you might take for granted just how much intuition you have about the programming world having programmed so much, right?
Peter Steinberger (02:04:54) There’s people that are high agency and very curious, and they get very far even though they have no deep understanding how software works just because they ask questions and questions and- and- and-
Lex Fridman (02:05:08) Mm-hmm
Peter Steinberger (02:05:08) … and agents are infinitely patient. Like, part of what I did this year is I went to a lot of iOS conferences because that’s my background and just told people, “Don’t consi- don’t see yourself as an iOS engineer anymore.” Like, “You need to change your mindset. You’re a builder.” And you can take a lot of the knowledge of how to build software into new domains, and all of the- the more fine-grained details, agents can help with. You don’t have to know how to splice an array or what the- what the correct template syntax is or whatever, but you can use all your- your general knowledge, and that makes it much easier to move from one galaxy, one tech galaxy, into another. And oftentimes, there’s languages that make more or less sense depending on what you build, right?
Peter Steinberger (02:05:58) So for example, when I build simple CLIs, I like Go. I actually don’t like Go. I don’t like the syntax of Go. I didn’t even consider the language. But the ecosystem is great, it works great with agents. It is garbage collected. It’s not the highest performing one, but it’s very fast. And for those type of- of CLIs that I build, Go is- is a really good choice. So I- I use a language I’m not even a fan of for… That’s my main go-to thing for- for CLIs.
Lex Fridman (02:06:29) Isn’t that fascinating that here’s a programming language you would’ve never used if you had to write it from scratch, and now you’re using it because LLMs are good at generating it and it has some of the characteristics that make it resilient, like garbage collected?
Peter Steinberger (02:06:44) Because everything’s weird in this new world and that just makes the most sense.
Lex Fridman (02:06:48) What’s the best… Ridiculous question. What’s the best programming language for the AI- AI agentic world? Is it JavaScript, TypeScript?
Peter Steinberger (02:06:54) TypeScript is really good. Sometimes the types can get really confusing and the ecosystem is- is a jungle. So for- for web stuff it’s good. I wouldn’t build everything in it.
Lex Fridman (02:07:15) Don’t you think we’re moving there? Like, that everything will eventually be written- eventually is written in JavaScript and it-
Peter Steinberger (02:07:22) The birth and death of JavaScript and we are living through it in real time.
Lex Fridman (02:07:26) Like, what does programming look like in 20 years? Right? In 30 years? In 40 years? What do programs and apps look like?
Peter Steinberger (02:07:32) You can even ask a question like, do we need a- a programming language that’s made for agents? Because all of those languages are made for humans. So how- what would that look like? Um, I think there’s a- there’s a whole bunch of interesting questions that we’ll discover. And also, because everything is now world knowledge, in many ways things will stagnate, ’cause if you build something new and the agent has no idea about it, that’s gonna be much harder to use than something that’s already there. Um… When I build Mac apps, I build them in, in Swift and SwiftUI, mm, partly because I like pain, partly because it… the, the deepest level of system integration, I can only get through there.
Peter Steinberger (02:08:18) And you clearly feel a difference if you click on an Electron app and it loads a web view in the menu. It’s just not the same. Sometimes I just also try new languages just to, like, get a feel for them.
Lex Fridman (02:08:32) Like Zig?
Peter Steinberger (02:08:33) Yeah. If it’s something where I care about performance a lot, then it’s, it’s a really interesting language. And it… like, agents got so much better at it over the last six months, from not really good to a totally valid choice. Just still a, a very young ecosystem. And most of the time you actually care about ecosystem, right? So, so if you build something that does inference or goes in the whole running-models direction, Python, very good.
Lex Fridman (02:09:06) Mm-hmm.
Peter Steinberger (02:09:07) But then if I build stuff in Python and I want a story where I can also deploy it on Windows, not a good choice.
Lex Fridman (02:09:13) Mm-hmm.
Peter Steinberger (02:09:13) Sometimes I, I found projects that kinda did 90% of what I wanted but were in Python, and I wanted them… I wanted an easy Windows story. Okay, just rewrite it in Go. But then if you go towards multiple, multiple threads and a lot more performance, Rust is a really good choice. There’s no… there’s just no single answer, and it’s also the beauty of it. Like, it’s fun.
Peter Steinberger (02:09:37) And now it doesn’t matter anymore, you can just literally pick the language that has the, the most fitting characteristics and ecosystem-
Lex Fridman (02:09:45) Mm-hmm
Peter Steinberger (02:09:46) … for your problem domain. And yeah, it might be… You might have s-… You might be a little bit slow in reading the code, but not really. Y- I think you, you pick stuff up really fast, and you can always ask your agent.

Life story and career advice

Lex Fridman (02:09:59) So there’s a lot of programmers and builders who draw inspiration from y- your story. Just the way you carry yourself, your choice of making OpenClaw open source, the, the way you have fun building and exploring, and doing that, for the most part, alone or on a small team. So by way of advice, what metric should be the goal that they would be optimizing for? What would be the metric of success? Would it be happiness? Is it money? Is it positive impact for people who are dreaming of building? ‘Cause you went through an interesting journey. You’ve achieved a lot of those things, and then you fell out of love with programming a little bit for a time.
Peter Steinberger (02:10:47) I was just burning too bright for too long. I, I ran… I started PSPDFKit, s- and ran it for 13 years, and it was high stress. Um, I had to learn all these things fast and hard, like how to manage people, how to bring people on, how to deal with customers, how to do…
Lex Fridman (02:11:14) So it wasn’t just programming stuff, it was people stuff.
Peter Steinberger (02:11:17) The stuff that burned me out was mostly people stuff. I, I don’t think burnout is working too much. Maybe to a degree. Everybody’s different. You know, I c- I cannot speak in a- in absolute terms, but for me, it was much more differences with my, my co-founders, conflicts, or, like, really high-stress situations with customers that eventually grinded me down. And then when… luckily we, we got a really good offer for, like, taking the company to the next level, and I, I already kinda worked two years on making myself obsolete. So at this point I could leave, and, and then I just… I was sitting in front of the screen and I felt like, you know Austin Powers where they suck the mojo out?
Lex Fridman (02:12:13) Yeah.
Peter Steinberger (02:12:14) Uh, I g- I was like, m- m- it was, like, gone. Like, I couldn’t… I couldn’t get code out anymore. I was just, like, staring and feeling empty, and then I, I just stopped. I, I booked, like, a one-way trip to Madrid and, and, and just, like, spent some t- some time there. I felt like I had to catch up on life, so I did a whole, a whole bunch of life catching-up stuff.
Lex Fridman (02:12:47) Did you go through some lows during that period? And you know, maybe advice on… of how to?
Peter Steinberger (02:12:56) Maybe advice on how to approach life. If you think that, “Oh yeah, work really hard and then I’ll retire,” I don’t recommend that. Because the idea of, “Oh yeah, I just enjoy life now,” a- maybe it’s appealing, but right now I enjoy life the most I’ve ever enjoyed life. Because if you wake up in the morning and you have nothing to look forward to, you have no real challenge, that gets very boring, very fast. And then when, when you’re bored, you’re gonna look for other ways to stimulate yourself, and then maybe, maybe that’s drugs, you know? But that eventually also gets boring and you look for more, and that will lead you down a very dark path.

Money and happiness

Lex Fridman (02:13:57) But you also showed on the money front, you know, a lot of people in Silicon Valley and the startup world, they maybe overthink it, optimize way too much for money. And you’ve also shown that it’s not like you’re saying no to money. I mean, I’m sure you take money, but it’s not… the primary objective of uh, of your life. Can you just speak to that? Your philosophy on money?
Peter Steinberger (02:14:20) When I built my company, money was never the driving force. It felt more like, like, an affirmation that I did something right. And having money solves a lot of problems. I also think there, there’s diminishing returns the more you have. Like, a cheeseburger is a cheeseburger, and I think if you go too far into, oh, I do private jet and I only travel luxury, you disconnect from society. Um, I, I donated quite a lot. Like, I have a, I have a foundation for helping people that weren’t so lucky.
Lex Fridman (02:15:11) And disconnecting from society is bad on many levels, but one of them is, like, humans are awesome. It’s nice to continuously remember the awesomeness in humans.
Peter Steinberger (02:15:23) I, I mean, I could afford really nice hotels. The last time I was in San Francisco, I did the, the first time the OG Airbnb experience-
Lex Fridman (02:15:30) Yeah, yeah
Peter Steinberger (02:15:30) … and just booked a room. Mostly because I, I thought, okay, you know, I’m out or I’m sleeping, and I don’t like where all the hotels are, and I wanted a, I wanted a different experience. I think, isn’t life all about experiences? Like, if you, if you tailor your life towards, “I wanna have experiences,” it, it reduces the need for, “It needs to be good or bad.” Like, if people only want good experiences, that’s not gonna work, but if you optimize for experiences, if it’s good, amazing. If it’s bad, amazing, because, like, I learned something, I saw something, did something. I wanted to experience that, and it was amazing. Like, there was, like, this, this queer DJ in there, and I showed her how to make music with Claude Code. And we, like, immediately bonded and had a great time.
Lex Fridman (02:16:24) Yeah, there’s something about that air- you know, couch surfing, Airbnb experience, the OG. I still do it to this day. It’s awesome. It’s humans, and that’s why travel is awesome.
Peter Steinberger (02:16:34) Yeah.
Lex Fridman (02:16:34) Just experience the variety of, the diversity of humans. And when it’s shitty, it’s good too, man. If it rains and you’re soaked and it’s all fucked, and planes, and everything is shit, everything is fucked, it’s still awesome. If you’re able to open your eyes, it’s good to be alive.
Peter Steinberger (02:16:49) Yeah, and anything that creates emotion and feelings is good.
Peter Steinberger (02:16:55) Even… So, so maybe, maybe even the crypto people are good because they definitely created emotions. I, I don’t know if I should go that far.
Lex Fridman (02:17:02) No, man. Give them, give them all, give them love. Give them love. Because I do think that online lacks some of the awesomeness of real life.
Peter Steinberger (02:17:13) Yeah.
Lex Fridman (02:17:13) That’s, that’s, it’s an open problem of how to solve, how to infuse the online cyber experience with, I don’t know, with the intensity that we humans feel when it’s in real life. I don’t know. I don’t know if that’s a solvable problem.
Peter Steinberger (02:17:31) Well, it’s just possible because text is very lossy.
Lex Fridman (02:17:35) Yeah.
Peter Steinberger (02:17:35) You know, sometimes I wish, if I talked to the agent… It should be multimodal so it also understands my emotions.
Lex Fridman (02:17:43) I mean, it, it might move there. It might move there.
Peter Steinberger (02:17:46) It will. It will. It totally will.

Acquisition offers from OpenAI and Meta

Lex Fridman (02:17:49) I mean, I have to ask you, just curious. I, I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger (02:18:04) Yeah. So, to, like, explain my thinking a little bit, right, I did not expect this blowing up so much. So, there’s a lot of doors that opened because of it. There’s, like, I think every VC, every big VC company is in my inbox and tried to get 15 minutes of me. So, there’s, like, this butterfly effect moment. I could just do nothing and continue, and I really like my life. Valid choice. Almost. Like, I considered it when I, when I wanted to delete the whole thing. I could create a company. Been there, done that. There’s so many people that push me towards that and, yeah, like, could be amazing.
Lex Fridman (02:19:07) Which is to say that you, you would probably raise a lot of money in that.
Peter Steinberger (02:19:10) Yeah.
Lex Fridman (02:19:11) I don’t know, hundreds of millions, billions. I don’t know. You could just get an unlimited amount of money.
Peter Steinberger (02:19:15) Yeah. It just doesn’t excite me as much because I feel I did all of that, and it would take a lot of time away from the things I actually enjoy. Same as when, when I was CEO, I think I, I learned to do it and I’m not bad at it, and partly I’m good at it. But yeah, that path doesn’t excite me too much, and I also fear it, it would create a natural conflict of interest. Like, what’s the most obvious thing I do? I, I productize it. I build, like, a version safe for the workplace. And then what do you do? Like, I get a pull request with a feature like an audit log, but that seems like an enterprise feature, so now I feel I have a conflict of interest between the open-source version and the closed-source version…
Peter Steinberger (02:20:15) Or change the license to something like FSL, where you cannot actually use it for commercial stuff. That would, first, be very difficult with all the contributions. And second of all, I- I like the idea that it’s free as in beer and not free with conditions. Yeah, there’s ways how you, how you keep all of that for free and just, like, still try to make money, but those are very difficult. And you see there’s, like, fewer and fewer companies managing that. Like, even Tailwind, they’re, like, used by everyone. Everyone uses Tailwind, right? And then they had to cut 75% of the employees because they’re not making money, because nobody’s even going on the website anymore, because it’s all done by agents. S- and just relying on donations, yeah, good luck.
Peter Steinberger (02:21:04) Like, for a project of my caliber, if I extrapolate what the typical open-source project would get, it’s not a lot. I s- I still lose money on the project because I made the point of supporting every dependency, except Slack. They are a big company. They can, they can, they can do without me. But all the projects that are done mostly by individuals, so, like, right now, all the sponsorship goes right through to my dependencies. And if there’s more, I want to, like, buy my contributors some merch, you know?
Lex Fridman (02:21:43) So you’re losing money?
Peter Steinberger (02:21:44) Yeah, right now I lose money on this.
Lex Fridman (02:21:46) So it’s really not sustainable?
Peter Steinberger (02:21:48) Uh, I mean, it’s like, I guess something between 10 and 20K a month. Which is fine. I’m sure over time I could get that down. Um, OpenAI is helping out a little bit with tokens now. And there’s other companies that have been generous. But yeah, still losing money on that. So that’s- that’s one path I consider, but I’m just not very excited. And then there’s all the big labs that I’ve been talking to. And from those Meta and OpenAI seem the most interesting.
Lex Fridman (02:22:32) Do you lean one way or the other?
Peter Steinberger (02:22:34) Yeah. Um… Not sure how much I should share there. It’s not quite finalized yet. Let’s- let’s just say, like, on either of these, my conditions are that the project stays open source. That it… Maybe it’s gonna be a model like Chrome and Chromium. Um, I think this is- this is too important to just give to a company and make it theirs. It… This is… And we didn’t even talk about the whole community part, but, like, the- the thing that I experienced in San Francisco, like at ClawCon, seeing so many people so inspired, like… And having fun and just, like, building shit, and, like, having, like, robots in lobster stuff walking around. Like, the…
Peter Steinberger (02:23:37) People told me, like, they didn’t experience this level of- of community excitement since, like, the early days of the internet, like 10, 15 years. And there were a lot of high caliber people there, like… Um, I was amazed. I also, like, was very sensory overloaded because too many people wanted to do selfies. But I love this. Like, this needs to stay a place where people can, like, hack and learn. But also, I’m very excited to, like, make this into a version that I can get to a lot of people because I think this is the year of personal agents, and that’s the future. And the fastest way to do that is teaming up with one of the labs. And I also, on a personal level, I never worked at a large company, and I’m intrigued. You know, we talk about experiences. Will I like it? I don’t know.
Peter Steinberger (02:24:42) But I want that experience. Uh, I- I’m sure, like, if- if I- if I announce this, then there will be people like, “Oh, he sold out,” blah, blah, blah. But the project will continue. From everything I talked to so far, I can even have more resources for that. Like, both s- both of those companies understand the value that I created something that accelerates our timeline and that got people excited about AI. I mean, can you imagine? Like, I installed OpenClaw on one of my, I’m sorry, normie friends. I’m sorry, Vahan. But he’s just a… You know?
Peter Steinberger (02:25:32) Like, he’s-
Lex Fridman (02:25:33) Normie with love, yeah. For sure.
Peter Steinberger (02:25:34) He- he’s, like, someone who uses the computer, but never really… Like, yeah, uses some ChatGPT sometimes, but not very technical. Wouldn’t really understand what I built. So, like, I’ll show you, and I- I paid for him the- the 90 buck, 100 buck, I don’t know, subscription for Anthropic. And set up everything for him with, like, WSL on Windows.
Lex Fridman (02:26:00) Mm-hmm.
Peter Steinberger (02:26:00) I was also curious, would it actually work on Windows, you know? Was a little early. And then within a few days, he was hooked. Like, he texted me about all the things he learned. He built, like, even little tools. He’s not a programmer. And then within a few days he upgraded to the $200 subscription. Or euros, because he’s in Austria… and he was in love with that thing. That, for me, was, like, a very early product validation. It’s like, I built something that captures people. And then, a few days later, Anthropic blocked him because, based on their rules, using the subscription is problematic or whatever. And he was, like, devastated. And then he signed up for MiniMax for 10 bucks a month and uses that.
Peter Steinberger (02:26:56) And I think that’s silly in many ways, because you just got a 200 buck customer. You just made someone hate your company, and we are still so early. Like, we don’t even know what the final form is. Is it gonna be Claude Code? Probably not, you know? Like, that seems very… It seems very short-sighted to lock down your product so much. All the other companies have been helpful. I- I’m in the Slack of, of most of the big labs. Kind of everybody understands that we are still in an era of exploration, in the era of the radio shows on TV, and not, and not a modern TV show that fully uses the format.
Lex Fridman (02:27:45) I think, I think you’ve made a lot of people, like, see the possibility. And non- Uh, sorry. Non, non-technical people see the possibility of AI, and just fall in love with this idea, and enjoy interacting with AI. And that’s a bea- That’s a really beautiful thing. I think I also speak for a lot of people in saying, I think you’re one of the, the great people in AI in terms of having a good heart, good vibes, humor, the right spirit. And so, in a sense, this model that you’re describing, having the open-source part, and you being part of, uh, also building a thing inside, additionally, of a large company, would be great, because it’s great to have good people in those companies.
Peter Steinberger (02:28:36) Yeah. You know, what also people don’t really see is… I made this in three months. I did other things as well. You know, I have a lot of projects. Like, this is not… Yeah, in January, this was my main focus because I saw the storm coming. But before that, I built a whole bunch of other things. Um, I have so many ideas. Some should be there, some would be a much better fit when I have access to the latest toys- Uh, and I, I kind of want to have access to, like, the latest toys. So this is important, this is cool, this will continue to exist. My, my short-term focus is, like, working through those… Is it two- Is it 3,000 PRs by now? I don’t even know. Like, there’s, there’s a little bit of backlog.
Peter Steinberger (02:29:23) But this is not gonna be the thing that I’m gonna work on until I’m, I’m, I’m 80, you know? This is… This is a window into the future. I’m gonna make this into a cool product. But yeah, I have like… I have more ideas.
Lex Fridman (02:29:36) If you had to pick, is there a company you lean towards? So Meta, OpenAI, is there one you lean towards going with?
Peter Steinberger (02:29:44) I spend time with both of those. And it’s funny, because a few weeks ago, I didn’t consider any of this. Um… And it’s really fucking hard. Like-
Lex Fridman (02:30:05) Yeah.
Peter Steinberger (02:30:06) I have some… I know no one at OpenAI. I love their tech. I think I’m the biggest Codex advertisement shill that’s unpaid. And it would feel so gratifying to, like, put a price on all the work I did for free. And I would love it if something happens and those companies get just merged, because it’s like…
Lex Fridman (02:30:32) Is this the hardest decision you’ve ever had to do?
Peter Steinberger (02:30:39) No. You know, I had some breakups in the past that feel like it’s the same level.
Lex Fridman (02:30:43) Relationships, you mean?
Peter Steinberger (02:30:45) Yeah.
Lex Fridman (02:30:47) Yeah, yeah, yeah, yeah.
Peter Steinberger (02:30:48) And, and I also know that, in the end, they’re both amazing. I cannot go wrong. This is like-
Lex Fridman (02:30:53) Right.
Peter Steinberger (02:30:54) This is, like, one of the most prestigious and, and, and, and, and largest… I mean, not largest, but, like, they’re both very cool companies.
Lex Fridman (02:31:02) Yeah, they both really know scale. So, if you’re thinking about impact, some of the wonderful technologies you’ve been exploring, how to do it securely, and how to do it at scale, such that you can have a positive impact on a large number of people. They both understand that.
Peter Steinberger (02:31:19) You know, both Nat and Mark basically played all week with my product, and sent me like, “Oh, this is great.” Or, “This is shit. Oh, I need to change this.” Or, like, funny little anecdotes. And people using your stuff is kind of like the biggest compliment, and it also shows me that, you know, they actually… T- they actually care about it. And I didn’t get the same on the OpenAI side. Um, I got… I got to see some other stuff that I find really cool, and they lured me with… I cannot tell you the exact number because of NDA, but you can, you can be creative and, and think of the Cerebras deal and how that would translate into speed. And it was very intriguing. You know, like, you give me Thor’s hammer. Yeah. I’ve… been lured with tokens. So, yeah.
Lex Fridman (02:32:34) So, it- it’s funny. So, so Mark started tinkering with the thing, essentially having fun with the thing.
Peter Steinberger (02:32:41) He got… He… Like, when he first… When he first approached me, I got him in my, in my WhatsApp and he was asking, “Hey, when can we have a call?” And I’m like, “I don’t like calendar entries. Let’s just call now.” And he was like, “Yeah, give me 10 minutes, I need to finish coding.”
Lex Fridman (02:33:01) Mm-hmm.
Peter Steinberger (02:33:01) Well, I guess that gives you street cred. It’s like, ugh, like, he’s still writing code. You know, he’s-
Lex Fridman (02:33:07) Yeah, he does
Peter Steinberger (02:33:07) … he didn’t drift away in just being a manager, he gets me. That was a good first start. And then I think we had a, like, a 10-minute fight about what’s better, Claude Code or Codex. Like, that’s the thing you first do, like, you casually call-
Lex Fridman (02:33:24) Yeah, that’s awesome
Peter Steinberger (02:33:24) … someone that owns one of the largest companies in the world and, and you have a 10-minute conversation about that.
Lex Fridman (02:33:30) Yeah, yeah.
Peter Steinberger (02:33:30) And then I think afterwards he called me eccentric but brilliant. But I also had some… I had some really, really cool discussions with Sam Altman, and he’s, he’s very thoughtful, brilliant, and I like him a lot from the, from the little time I had, yeah. I mean, I know it’s peop- some people vilify both of those people. I don’t think it’s fair.
Lex Fridman (02:34:15) I think no matter what, the stuff you’re building, and the kind of human you are, doing stuff at scale is kinda awesome. I’m excited.
Peter Steinberger (02:34:24) I am super pumped. And you know the beauty is if, if it doesn’t work out, I can just do my own thing again. Like, I, I told them, like, I, I don’t do this for the money, I don’t give a fuck. I-
Lex Fridman (02:34:42) Yeah.
Peter Steinberger (02:34:42) I mean, of course, of course it’s a nice compliment but I wanna have fun and have impact, and that’s ultimately what made my decision.

How OpenClaw works

Lex Fridman (02:34:58) Can I ask you about… we’ve talked about it quite a bit, but maybe just zooming out, about how OpenClaw works. We talked about different components, I want to ask if there’s some interesting stuff we missed. So, there’s the gateway, there’s the chat clients, there’s the harness, there’s the agentic loop. You said somewhere that everybody should im- implement an agent loop at some point in their lives.
Peter Steinberger (02:35:24) Yeah, because it’s like the, it’s like the Hello World in AI, you know? And it’s actually quite simple.
Lex Fridman (02:35:30) Yeah.
Peter Steinberger (02:35:30) And it- it’s good to understand that that stuff’s not magic. You can, you can easily build it yourself. So, writing your own little Claude Code… I, I even did this at a conference in Paris for people, to, like, introduce them to AI. I think it’s, it’s a fun little practice. And you, you covered a lot. I think one, one silly idea I had that turned out to be quite cool is I built this thing with full system access. So it’s like, you know, with great power comes great responsibility.
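Since the agent loop keeps coming up as the “Hello World” of AI, a minimal sketch may help. This one assumes the OpenAI Node SDK and hands the model a single shell tool; the model name and the tool are illustrative, not how OpenClaw implements its loop.

```typescript
import OpenAI from "openai";
import { execSync } from "node:child_process";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One tool: let the model run shell commands (full system access, so:
// "with great power comes great responsibility").
const tools = [
  {
    type: "function" as const,
    function: {
      name: "run_shell",
      description: "Run a shell command and return its output.",
      parameters: {
        type: "object",
        properties: { command: { type: "string" } },
        required: ["command"],
      },
    },
  },
];

async function agentLoop(goal: string): Promise<string> {
  const messages: any[] = [{ role: "user", content: goal }];
  for (let step = 0; step < 10; step++) { // safety cap on iterations
    const res = await client.chat.completions.create({
      model: "gpt-4o", // illustrative model name
      messages,
      tools,
    });
    const msg = res.choices[0].message;
    messages.push(msg);
    // No tool calls means the model considers itself done: final answer.
    if (!msg.tool_calls?.length) return msg.content ?? "";
    for (const call of msg.tool_calls) {
      const { command } = JSON.parse(call.function.arguments);
      let output: string;
      try {
        output = execSync(command, { encoding: "utf8" });
      } catch (err) {
        output = String(err); // feed errors back so the model can retry
      }
      messages.push({ role: "tool", tool_call_id: call.id, content: output });
    }
  }
  return "(stopped after 10 steps)";
}

agentLoop("How much free disk space do I have?").then(console.log);
```

The whole trick is the loop: call the model, execute whatever tools it asks for, append the results, and repeat until it answers in plain text.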
Peter Steinberger (02:36:09) And I was like, “How can I up the stakes a little bit more?”
Lex Fridman (02:36:13) Yeah, right.
Peter Steinberger (02:36:14) And I just made a… I made it proactive. So, I added a prompt. Initially, it was just a prompt, surprise me. Every, like, half an hour, surprise me, you know? And later on I changed it to be like a little more specific and-
Lex Fridman (02:36:31) Yeah
Peter Steinberger (02:36:31) … in the definition of surprise. But the fact that I made it proactive and that it knows you and that it cares about you, it- it’s, at least, it’s programmed to that, prompted to do that. And that, that it follows on, on your current session makes it very interesting, because it would just sometimes ask a follow-up question or, like, “How’s your day?”
Lex Fridman (02:36:53) Yeah, right.
Peter Steinberger (02:36:58) I mean, again, it’s a little creepy or weird or interesting, but Heartbeat… in the beginning, and still today, the model doesn’t choose to use it a lot.
Lex Fridman (02:37:16) By the way, we’re, we’re, we’re talking about Heartbeat, as you mentioned, the thing that regularly-
Peter Steinberger (02:37:22) Yeah. Like kicks-
Lex Fridman (02:37:23) … Acts.
Peter Steinberger (02:37:23) You just kick off the loop.
Lex Fridman (02:37:25) Isn’t that just a cron job, man?
Peter Steinberger (02:37:27) Yeah, right, I mean, it’s like-
Lex Fridman (02:37:29) It’s the cr- the criticisms that you get are hilarious.
Peter Steinberger (02:37:31) You can, you can reduce any idea to, like, a silly… Yeah, it’s just, it’s just a cron job in the end. I have, like, separate cron jobs.
Lex Fridman (02:37:41) Isn’t love just evolutionary biology manifesting itself and isn’t… aren’t you guys just using each other?
Peter Steinberger (02:37:49) And then, yeah, and the project is all just glue of a few different dependencies-
Lex Fridman (02:37:52) Yeah
Peter Steinberger (02:37:53) … and there’s nothing original. Why do people… Well, you know, isn’t Dropbox just FTP with extra steps?
Lex Fridman (02:38:00) Yeah.
Peter Steinberger (02:38:01) I found it surprising… I had this, I had a shoulder operation a few months ago, so.
Lex Fridman (02:38:06) Mm-hmm.
Peter Steinberger (02:38:08) And the model rarely used Heartbeat, but then I was in the hospital, and it knew that I had the operation and it checked up on me. It’s like, “Are you okay?” And I just… It’s like, again, apparently, if something’s significant in the context, that triggered the Heartbeat, when it rarely used the Heartbeat otherwise… And it does that sometimes for people, and that just makes it a lot more relatable.
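Mechanically, the Heartbeat is about as small as the cron-job jokes suggest. A sketch of the idea in TypeScript, where `runAgentTurn` and `sendToChat` are hypothetical stand-ins for the agent loop and the messaging bridge:

```typescript
// Heartbeat sketch: wake the agent on a schedule instead of waiting for
// the user to message first. All names here are illustrative stand-ins.
const HEARTBEAT_PROMPT =
  "Review recent context. If something deserves a proactive message " +
  "(a follow-up, a check-in), write it. Otherwise reply exactly HEARTBEAT_OK.";

// Hypothetical stubs: wire these to a real agent loop and chat client.
async function runAgentTurn(prompt: string): Promise<string> {
  return "HEARTBEAT_OK";
}
function sendToChat(text: string): void {
  console.log(text); // would go out via Telegram, WhatsApp, etc.
}

setInterval(async () => {
  const reply = await runAgentTurn(HEARTBEAT_PROMPT);
  // Only surface the turn if the model chose to say something.
  if (reply.trim() !== "HEARTBEAT_OK") sendToChat(reply);
}, 30 * 60 * 1000); // every half hour, "surprise me"-style
```

It really is just a timer; what makes it feel alive is that the standing prompt runs against the agent’s accumulated context.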
Lex Fridman (02:38:36) Let me look this up on Perplexity, how OpenClaw works, just to see if I’m missing any of the stuff. Local agent runtime, high-level architecture. There’s… Oh, we haven’t talked much about skills, I suppose. Skill hub, the tools in the skill layer, but that’s definitely a huge component, and there’s a huge growing set of skills-
Peter Steinberger (02:38:55) You know, you know what I love? That half a year ago, like everyone was talking about MCPs-
Lex Fridman (02:39:02) Yeah
Peter Steinberger (02:39:02) … and I was like, “Screw MCPs. Every MCP would be better as a CLI.” And now this stuff doesn’t even have MCP support. I mean, it, it has, with asterisks, but not in the core layer, and nobody’s complaining.
Lex Fridman (02:39:23) Mm-hmm.
Peter Steinberger (02:39:24) So my approach is, if you want to extend the model with more features, you just build a CLI, and the model can call the CLI, probably gets it wrong, calls the help menu, and then on demand loads into the context what it needs to use the CLI. It just needs a sentence to know that the CLI exists, if it’s something that the model doesn’t know about by default. And even, for a while, I, I didn’t really care about skills, but skills are actually perfect for that because they, they boil down to a single sentence that explains the skill, and then the model loads the skill, and that explains the CLI, and then the model uses the CLI. Some skills are, like, raw, but most of the time, that works.
Lex Fridman (02:40:16) It’s interesting, um, I’m asking Perplexity, MCP versus skills, because this kind of requires a hot take that’s quite recent, because your general view is MCPs are dead-ish. So MCP is a more structured thing. So if you listen to Perplexity here, MCP is, what can I reach? So APIs, databases, services, files, via a protocol. So a structured protocol of how you communicate with a thing. And then skills is more, how should I work? Procedures, how-tos, helper scripts, and prompts, often written in a kind of semi-structured natural language, right? And so technically skills could replace MCP if you have a smart enough model.
Peter Steinberger (02:41:00) I think the main beauty is, is that models are really good at calling Unix commands. So if you just add another CLI, that’s just another Unix command in the end. And MCP is… That has to be added in training. That’s not a very natural thing for the model. It requires a very specific syntax. And the biggest thing, it’s not composable. So imagine if I have a service that gives me weather data and gives me the temperature, the average temperature, rain, wind, and all the other stuff, and I get, like, this huge blob back. As a model, I always have to get the huge blob back. I have to fill my context with that huge blob and then pick what I want. There’s no way for the model to naturally filter unless I think about it proactively and add a filtering way into my MCP.
Peter Steinberger (02:41:53) But if I would build the same as a CLI and it would give me this huge blob, it could just add a jq command and filter itself and then only, only get me what I actually need. Or maybe even compose it into a script to, like, do some calculations with the temperature and only give me the exact output, and the mo- and the… you have no context pollution. Again, you can solve that with, like, sub-agents and more charades, but it’s just, like, workarounds for something that might not be the optimal way. There’s… It definitely… it was, you know, it was good that we had MCPs because it pushed a lot of companies towards building APIs, and now I, I can, like, look at an MCP and just make it into a CLI.
Lex Fridman (02:42:37) Mm-hmm.
Peter Steinberger (02:42:37) But this, this inherent problem that MCPs by default clutter up your context, plus the fact that most MCPs are not made well, in general makes it just not a very useful paradigm. There are some exceptions, like Playwright, for example, that requires state and is actually useful. That is an acceptable choice.
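To make the composability argument concrete: a CLI that prints plain JSON lets the agent filter the blob itself with standard Unix tools, instead of swallowing the whole response into context the way an MCP reply arrives. A sketch, with the weather service and its fields entirely made up:

```typescript
#!/usr/bin/env node
// weather.ts: a hypothetical weather CLI. Because it prints plain JSON
// to stdout, an agent can compose it instead of reading the whole blob:
//
//   weather --city Vienna | jq .tempC
//   weather --city Vienna | jq '{tempC, rainMm}'
//
const cityFlag = process.argv.indexOf("--city");
const city = cityFlag !== -1 ? process.argv[cityFlag + 1] : "Vienna";

// Fake payload standing in for a verbose API response.
const blob = {
  city,
  tempC: 7,
  avgTempC: 5,
  rainMm: 1.2,
  windKmh: 18,
  forecast: Array.from({ length: 48 }, (_, h) => ({
    hour: h,
    tempC: 5 + (h % 5),
  })),
};

console.log(JSON.stringify(blob, null, 2)); // jq does the filtering downstream
```

An MCP server returning the same payload would land in the context wholesale; here the model only pays for the fields it pipes out.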
Lex Fridman (02:43:05) So Playwright you use for browser use, which I think is c- already in OpenClaw is quite incredible, right?
Peter Steinberger (02:43:11) Yeah.
Lex Fridman (02:43:12) You can basically do everything, most things you can think of using browser use.
Peter Steinberger (02:43:17) That, that gets into the whole arc of, every app is just a very slow API now, whether they want it or not. And that through personal agents a lot of apps will disappear. You know, like I had a… I built a CLI for Twitter. I mean, I- I just reverse engineered their website and used the internal API, which is not very allowed.
Lex Fridman (02:43:50) It’s called Bird, short-lived.
Peter Steinberger (02:43:53) It was called Bird, because the bird had to disappear.
Lex Fridman (02:43:57) The, the wings were clipped.
Peter Steinberger (02:43:59) All they did is they just made access slower. Yeah, not tak- you’re not actually taking a feature away, but now inst- if, if your agent wants to read a tweet, it actually has to open the browser and read the tweet. And it will still be able to read the tweet. It will just take longer. It’s not like you are making something that was possible, not possible. No. Now, it’s just taking… Now it’s just a bit slower. So, so it doesn’t really matter if your service wants to be an API or not. If I can access it in the browser… easy API. It’s a slow API.
Lex Fridman (02:44:35) Can you empathize with their situation? Like, what would you do if you were Twitter, if you were X? Because they’re basically trying to protect against other large companies scraping all their data.
Peter Steinberger (02:44:45) Yeah.
Lex Fridman (02:44:46) But in so doing, they’re cutting off like a million different use cases for smaller developers that actually want to use it for helpful cool stuff.
Peter Steinberger (02:44:54) I think that a very low per-day baseline per account that allows read-only access would solve a lot of problems. There’s plenty, plenty of automations where people create a bookmark and then use OpenClaw to, like, find the bookmark, do research on it, and then send you an email-
Lex Fridman (02:45:16) Mm-hmm
Peter Steinberger (02:45:16) … with, like, more details on it or a summary. That’s a cool approach. I also want all my bookmarks somewhere to search. I would still like to have that.
Lex Fridman (02:45:26) So, read-only access for the bookmarks you make on X. That seems like an incredible application because a lot of us find a lot of cool stuff on X, we bookmark, that’s the general purpose of X. It’s like, holy shit, this is awesome. Oftentimes, you bookmark so many things you never look back at them.
Peter Steinberger (02:45:40) Yeah.
Lex Fridman (02:45:40) It would be nice to have tooling that organizes them and allows you to research it further.
Peter Steinberger (02:45:44) Yeah, I mean, and to be frank, I, I mean, I, I told Twitter proactively that, “Hey, I built this and there’s a need.” And they’ve been really nice, but also like, “Take it down.” Fair. Totally fair. But I hope that this woke up the team a little bit, that there’s a need. And if all you do is make it slower, you’re just reducing access to your platform. I’m sure there’s a better way. Also, I’m very much against any automation on Twitter. If you tweet at me with AI, I will block you. No first strike. As soon as it smells like AI, and AI still has a smell.

AI slop

Lex Fridman (02:46:31) Mm-hmm.
Peter Steinberger (02:46:32) Especially on tweets. It’s very hard to tweet in a way that looks completely human.
Lex Fridman (02:46:38) Mm-hmm.
Peter Steinberger (02:46:38) And then I block. Like, I have a zero tolerance policy on that. And I think it would be very helpful if they, if, like, tweets done via API were marked. Maybe there’s some special cases where… But, and there should be, there should be a very easy way for agents to get their own Twitter account. Um…
Lex Fridman (02:47:04) Mm-hmm.
Peter Steinberger (02:47:07) We, we need to rethink social platforms a little bit if, if, if we, we, we go towards a future where everyone has their agent, and agents maybe have their own Instagram profiles or Twitter accounts, so they can, like, do stuff on my behalf. I think it should very clearly be marked that they are doing stuff on my behalf and it’s not me. Because content is now so cheap. Eyeballs are the expensive part. And I find it very triggering when I read something and then I’m like, oh, no, this smells like AI.
Lex Fridman (02:47:41) Yeah. Like, where, where is this headed in terms of what we value about the human experience? It feels like we’ll, we’ll move more and more towards in-person interaction, and we’ll just communicate. We’ll talk to our AI agent to, to accomplish different tasks, to learn about different things, but we won’t value online interaction because there’ll be so much AI slop that smells, and so many bots, that it’s difficult.
Peter Steinberger (02:48:15) Well, if it’s smart, then it shouldn’t be difficult to filter. And then I can look at it if I want to. But yeah, this is, like, a big thing we need to solve right now. E- especially on this project, I get so many emails that are, let’s say nicely, agentically written.
Lex Fridman (02:48:36) Yeah.
Peter Steinberger (02:48:36) But I’d much rather read your broken English than your AI slop. You know, of course there’s a human behind it, and yet they, they prompt it. I’d much rather read your prompt than what came out. Um, I think we’re reaching a point where I value typos again.
Lex Fridman (02:48:56) Yeah.
Peter Steinberger (02:48:56) Like… Like, and I, I mean, it also took me a while to, like, come to the realization. I, on my blog, I experimented with creating a blog post with agents, and ultimately it took me about the same time to, like, steer the agent towards something I like. But it missed the nuances of how I would write it. You know, you can, like, you can steer it towards your style, but it’s not gonna be all your style. So, I, I completely moved away from that. I, I, everything, everything I blog is organic, handwritten, and maybe, maybe I, I, I use AI to fix my worst typos. But there’s value in the rough parts of an actual human.
Lex Fridman (02:49:53) Isn’t that awesome? Isn’t that beautiful? That now because of AI we value the raw humanity in each of us more.
Peter Steinberger (02:50:02) I also, I also realized this thing that I, I rave about AI and use it so much for anything that’s code, but I’m allergic if it’s stories.
Lex Fridman (02:50:12) Right. Yeah.
Peter Steinberger (02:50:14) Also, documentation, still fine with AI. You know, better than nothing.
Lex Fridman (02:50:17) And for now it’s still i- it applies in the mi- in the visual medium too. It’s fascinating how allergic I am to even a little bit of AI slop in, in video and images. It’s useful, it’s nice if it’s like a little component of like-
Peter Steinberger (02:50:32) Or even, even those images. The, like, all these infographics and stuff, the-… they trigger me so hard.
Lex Fridman (02:50:38) Yeah.
Peter Steinberger (02:50:39) Like, it immediately makes me think less of your content. And it … They were novel for, like, one week and now it just screams slop.
Lex Fridman (02:50:50) Yeah.
Peter Steinberger (02:50:51) Even- even if people work hard on it, using … And I- I have some on my blog post, you know, in the- in the time where I- I explored this new medium. But now, they trigger me as well. It’s like, yeah, this is … This just screams AI slop. I-
Lex Fridman (02:51:06) What… I don’t know what that is, but I went through that too. I was really excited by the diagrams. And then I realized, in order to remove the hallucinations from them, you actually have to do a huge amount of work. And if you’re just using it to draw the better diagrams, great. And then I’m proud of the diagram. I’ve used them literally, like, ki- ki- kind of like you said, for maybe a couple of weeks. And now I look at those, and I- I feel like I feel when I look at Comic Sans as a font or- or something like this.
Lex Fridman (02:51:32) It’s like, “No, this is-“
Peter Steinberger (02:51:35) It’s a smell.
Lex Fridman (02:51:35) “… this is fake. It’s fraudulent. There’s something wrong with it.” And it…
Peter Steinberger (02:51:41) It’s a smell.
Lex Fridman (02:51:42) It’s a smell.
Peter Steinberger (02:51:44) It’s a smell.
Lex Fridman (02:51:44) And it’s awesome because it re- it reminds you that we know. There’s so much to humans that’s amazing and we know that. And we- we know it. We know it when we see it. And so that gives me a lot of hope, you know? That gives me a lot of hope about the human experience. It’s not going to be damaged by … It’s only going to be empowered as tools by AI. It’s not going to be damaged or limited or somehow altered to where it’s no longer human. So … Uh, I need a bathroom break. Quick pause. You mentioned that a lot of the apps might be basically made obsolete. Do you think agents will just transform the entire app market?

AI agents will replace 80% of apps

Peter Steinberger (02:52:30) Yeah. Uh, I noticed that on Discord, that people just said how their … like, what they build and what they use it for. And it’s like, why do you need MyFitnessPal when the agent already knows where I am? So, it can assume that I make bad decisions when I’m at, I don’t know, Waffle House, what’s around here? Or- or briskets in Austin.
Lex Fridman (02:52:57) There’s no bad decisions around briskets, but yeah.
Peter Steinberger (02:53:00) No, that’s the best decision, honestly. Um-
Lex Fridman (02:53:03) Your agent should know that.
Peter Steinberger (02:53:04) But it can, like … It can modify my- my gym workout based on how well I slept, or if I’m … if I have stress or not. Like, it has so much more context to make even better decisions than any of this app even could do.
Lex Fridman (02:53:18) Mm-hmm.
Peter Steinberger (02:53:19) It could show me UI just as I like. Why do I still need an app to do that? Why do I have to … Why should I pay another subscription for something that the agent can just do now? And why do I need my- my Eight Sleep app to control my bed when I can tell the a- … tell the agent to … You know, the agent already knows where I am, so he can, like, turn off what I don’t use.
Lex Fridman (02:53:45) Mm-hmm.
Peter Steinberger (02:53:47) And I think that will … that will translate into a whole category of apps that are no longer … I will just naturally stop using because my agent can just do it better.
Lex Fridman (02:54:00) I think you said somewhere that it might kill off 80% of apps.
Peter Steinberger (02:54:04) Yeah.
Lex Fridman (02:54:05) Don’t you think that’s a gigantic transformative effect on just all software development? So that means it might kill off a lot of software companies.
Peter Steinberger (02:54:13) Yeah. Um-
Lex Fridman (02:54:16) It’s a scary thing. So, like, do you think about the impact that has on the economy? On just the ripple effects it has to society? Transforming who builds what tooling. It empowers a lot of users to get stuff done, to get stuff more efficiently, to get it done cheaper.
Peter Steinberger (02:54:41) It’s also new services that we will need, right? For example, I want my agent to have an allowance. Like, you solve problems for me, here’s like 100 bucks in order to solve problems for me. And if I tell you to order me food, maybe it uses a service. Maybe it uses something like rent-a-human to, like, just get that done for me.
Lex Fridman (02:55:06) Mm-hmm.
Peter Steinberger (02:55:06) I don’t actually care. I care about: solve my problem. There’s space for- for new companies to solve that well. Maybe don’t … Not all apps disappear. Maybe some transform into being APIs.
Lex Fridman (02:55:21) So, basically, apps that rapidly transform into being agent-facing. So, there’s a real opportunity for, like, Uber Eats, that we just used earlier today. It- it’s companies like this, of which there’s many. Who gets there fastest to being able to interact with OpenClaw in a way that’s the m- the most natural, the easiest?
Peter Steinberger (02:55:50) Yeah. And also, apps will become APIs whether they want it or not. Because my agent can figure out how to use my phone. I mean, on- on the other side, it’s a little more tricky. On Android, that’s already… People already do that. And then it’ll just click the Order Uber for Me button for me. Or maybe another service. Or maybe there’s- there’s a… there’s an API I can call so it’s faster. Uh, I think that’s a space we’re just beginning to even understand what that means. And I… Again, I didn’t even… That was not something I thought of. Something that I- that I discovered as people use this, and it… We are still so early. But yeah, I think data is very important. Like, apps that can give me data, but that also can be an API. Why do I need a Sonos app anymore when I can…
Peter Steinberger (02:56:44) … when my agent can talk to the Sonos speakers directly. Like my cameras, there’s, like, a crappy app, but they have, they have an API, so my agent uses the API now.
Lex Fridman (02:56:57) So it’s gonna force a lot of companies to have to shift focus. That’s kind of what the internet did, right? You have to rapidly rethink, reconfigure what you’re selling, how you’re making money.
Peter Steinberger (02:57:10) Yeah, and some companies were really not like that. For example, there’s no CLI for Google, so I had to, like… had to do everything myself and build GAWK. That’s, like, a CLI for Google. And as the… Yeah, as the end user, they have to give me the emails because otherwise I cannot use their product. If I’m a company and I try to get Google data, Gmail, there’s a whole complicated process, to the point where sometimes startups acquire startups that went through the process, so they don’t- don’t have to work with Google for half a year to be certified to be able to access Gmail. But my agent can access Gmail because I can just connect to it. It’s still crappy because I need to, like, go through Google’s developer jungle to get a key, and that’s still annoying.
Peter Steinberger (02:58:09) But they cannot prevent me. And worst case, my agent just clicks on the, on the website and gets the data out that way.
Lex Fridman (02:58:17) Through browsers?
Peter Steinberger (02:58:18) Yeah. I mean, I, I watch my agent happily click the I’m not a robot button. And there’s this, this whole… That’s gonna be… That’s gonna be more heated. You see companies like Cloudflare that try to prevent bot access. And in some ways, that’s useful, against scraping. But in other ways, if I’m, I’m a personal user, I want that. You know, sometimes I, I use Codex and I, I read an article about modern React patterns, and it’s, like, a Medium article. I paste it in and the agent can’t read it because they block it. So then I have to copy-paste the actual text. Or in the future, I’ll learn that maybe I don’t click on Medium because it’s annoying, and I use other websites that actually are agent-friendly.
Peter Steinberger (02:59:12) So, uh-
Lex Fridman (02:59:13) There’s gonna be a lot of powerful, rich companies fighting back. So it’s really intere- You’re at the center, you’re the catalyst, the leader, and happen to be at the center of this kind of revolution where it’s get- gonna completely change how we interact with services, with, with the web. And so, like, there’s companies like Google that are gonna push back. I mean, every major company you could think of is gonna push back.
Peter Steinberger (02:59:39) Even… Yeah, even search. I now use, I think, Perplexity or Brave as providers, because Google really doesn’t make it easy to use Google without Google. I’m not sure if that’s the right strategy, but I’m not Google.
Lex Fridman (02:59:58) Yeah, there’s a nice balance from a big-company perspective, ’cause if you push back too much for too long, you become Blockbuster and you lose everything to the Netflixes of the world. But some pushback is probably good during a revolution.
Peter Steinberger (03:00:11) Yeah. But you see that this is something that the people want.
Lex Fridman (03:00:14) Right.
Peter Steinberger (03:00:14) So-
Lex Fridman (03:00:15) Yes.
Peter Steinberger (03:00:16) If I’m on the go, I don’t wanna open a calendar app. I just wanna tell my agent, “Hey, remind me about this dinner tomorrow night,” and maybe invite two of my friends, and then maybe send a WhatsApp message to my friend. I don’t want or need to open apps for that. I think we’ve passed that age, and now everything is much more connected and fluid, whether those companies want it or not. And I think the right companies will find ways to jump on the train, and other companies will perish.

Will AI replace programmers?

Lex Fridman (03:00:55) You’ve got to listen to what the people want. We talked about programming quite a bit, and a lot of folks who are developers are really worried about their jobs, about the future of programming. Do you think AI replaces programmers completely? Human programmers?
Peter Steinberger (03:01:11) I mean, we’re definitely going in that direction. Programming is just a part of building products. So maybe AI does replace programmers eventually. But there’s so much more to that art. Like, what do you actually wanna build? How should it feel? How’s the architecture? I don’t think agents will replace all of that. The actual art of programming will stay there, but it’s gonna be like knitting, you know? People do that because they like it, not because it makes any sense. I read this article this morning about how it’s okay to mourn our craft. And I can…
Peter Steinberger (03:02:04) A part of me very strongly resonates with that, because in my past I spent a lot of time tinkering, just being really deep in the flow and cranking out code and finding really beautiful solutions. And yes, in a way it’s sad, because that will go away. I also get a lot of joy out of just writing code and being really deep in my thoughts and forgetting time and space and just being in this beautiful state of flow. But you can get the same state of flow… I get a similar state of flow by working with agents and building and thinking really hard about problems. It is different, and it’s okay to mourn it, but that’s not something we can fight. Like, the world for a long time had a…
Peter Steinberger (03:03:06) there was a lack of intelligence, if you s- if you see it like that, of people building things, and that’s why salaries of software developers reached stupidly high amounts and then will go away. There will still be a lot of demand for people that understand how to build things. It’s just that all this tokenized intelligence enables people to do a lot more, a lot faster. And it will be even more… even faster and even more because those things are continuously improving. We had similar things when… I mean, it’s probably not a perfect analogy, but when we created the steam engine, and they built all these factories and replaced a lot of manual labor, and then people revolted and broke the machines.
Peter Steinberger (03:04:04) I can relate: if you very deeply identify as a programmer, it’s scary and it’s threatening, because what you like and what you’re really good at is now being done by an entity, soulless or not. But I don’t think you’re just a programmer. That’s a very limiting view of your craft. You are still a builder.
Lex Fridman (03:04:40) Yeah, there’s a couple of things I want to say. One is, as you’re articulating this beautifully, I’m realizing I never thought the thing I love doing would be the thing that gets replaced. You hear these stories, like you said, with the steam engine. I’ve spent so many, I don’t know, maybe thousands of hours poring over code, putting my heart and soul into it. Some of my most painful and happiest moments were alone behind… I was an Emacs person for a long time. Man, Emacs. And then there’s an identity and there’s meaning. Like, when I walk about the world, I don’t say it out loud, but I think of myself as a programmer. And to have that, in a matter of months…
Lex Fridman (03:05:31) I mean, like you mentioned, April to November, it really is a leap that happened, a shift that’s happening. To have that completely replaced is painful. It’s truly painful. But I also think programmers, builders more broadly… what is the act of programming? I think programmers are generally best equipped at this moment in history to empathize with agents, to learn the language of agents. To feel the CLI.
Peter Steinberger (03:06:10) Yeah.
Lex Fridman (03:06:11) Like, to understand: what is the thing you, the agent, need to do this task the best?
Peter Steinberger (03:06:21) I think at some point it’s just gonna be called coding again, and it’s just gonna be the new normal.
Lex Fridman (03:06:25) Yeah.
Peter Steinberger (03:06:25) And yet, while I don’t write the code, I very much feel like I’m in the driver’s seat and I am writing the code, you know? It’s just-
Lex Fridman (03:06:37) You’ll still be a programmer. It’s just that the activity of a programmer is different.
Peter Steinberger (03:06:41) Yeah. And on X, the bubble is mostly positive. On Mastodon and Bluesky… I also use them less, because oftentimes I got attacked for my blog posts. I had stronger reactions in the past; now I can sympathize with those people more, ’cause in a way I get it. In a way, I also don’t get it, because it’s very unfair to grab onto the person you see right now and unload all your fear and hate. It’s gonna be a change and it’s gonna be challenging, but it’s also… I don’t know. I find it incredibly fun and gratifying. And I can use the new time to focus on many more details. I think the level of expectation of what we build is also rising, because the default is now so much easier. So software is changing in many ways.
Peter Steinberger (03:07:45) There’s gonna be a lot more. And then you have all these people who are screaming, “Oh yeah, but what about the water?” You know? Like, I did a conference in Italy about the state of AI, and my whole motivation was to push people: don’t see yourself as an iOS developer anymore. You’re now a builder, and you can use your skills in many more ways. Also because apps are slowly going away. People didn’t like that. A lot of people didn’t like what I had to say. And I don’t think I was being hyperbolic, I was just saying, “This is how I see the future.” Maybe this is not how it’s going to be, but I’m pretty sure a version of that will happen.
Peter Steinberger (03:08:30) And the first question I got was, “Yeah, but what about the insane water use of data centers?” But then you actually sit down and do the maths, and for most people, if you just skip one burger per month, that compensates for the CO2 output, or the water use, of the equivalent in tokens. I mean, the maths is tricky, and it depends: if you add pre-training, then maybe it’s more than just one patty. But it’s not off by a factor of 100, you know? Or, like, golf still uses way more water than all data centers together. So are you also hating people who play golf? Those people grab onto anything they think is bad about AI without seeing the things that might be good about AI.
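For the curious, here is the back-of-the-envelope version of that claim, with illustrative figures that are editorial assumptions, not numbers Peter cites: commonly quoted estimates put one beef burger at roughly 2,000 liters of embedded water, and one LLM query at roughly 20 milliliters of cooling water for inference.

```latex
% Back-of-envelope with assumed, illustrative figures (not from the conversation):
%   W_b = water per beef burger  \approx 2000 \text{ L}
%   W_q = water per LLM query    \approx 0.02 \text{ L (inference only)}
\[
  \frac{W_b}{W_q} \approx \frac{2000\ \mathrm{L}}{0.02\ \mathrm{L/query}}
  = 100{,}000\ \text{queries}
\]
% So skipping one burger a month offsets on the order of 10^5 queries.
% Amortizing pre-training shrinks the ratio, but not by a factor of 100.
```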
Lex Fridman (03:09:23) Mm-hmm.
Peter Steinberger (03:09:24) And I’m not saying everything’s good. It’s certainly gonna be a very transformative technology for our society.
Lex Fridman (03:09:32) To steelman the criticism in general, I do wanna say, in my experience with Silicon Valley, there’s a bit of a bubble, in the sense that there’s a kind of excitement and an over-focus on the positive that the technology can bring.
Peter Steinberger (03:09:54) Yeah.
Lex Fridman (03:09:55) Which is great. It’s great not to be paralyzed by fear and fear-mongering and so on. But within that excitement, and within everybody talking just to each other, there’s a dismissal of the basic human experience across the United States, the Midwest, across the world. Including the programmers we mentioned, including all the people who are gonna lose their jobs, including the measurable pain and suffering that happens in the short term when there’s change of any kind, especially the large-scale transformative change we’re about to face if what we’re talking about materializes. And so, having a bit of that humility and awareness about the tools you’re building: they’re going to cause pain.
Lex Fridman (03:10:43) They will long term hopefully bring about a better world, and even more opportunities-
Peter Steinberger (03:10:48) Yeah
Lex Fridman (03:10:48) … and even more awesomeness. But having that kind of quiet moment, often, of respect for the pain that is going to be felt. Not enough of that is done, I think, so it’s good to have a bit of that.
Peter Steinberger (03:11:07) And then I also have to weigh against that some of the emails I got, where people told me they have a small business and they’ve been struggling, and OpenClaw helped them automate a few of the tedious tasks, from collecting invoices to answering customer emails, and that freed them up and gave them a bit more joy in their life.
Lex Fridman (03:11:30) Mm-hmm.
Peter Steinberger (03:11:31) Or some emails where they told me that OpenClaw helped their disabled daughter, that she’s now empowered and feels she can do much more than before. Which is amazing, right? Because you could do that before as well. The technology was there. I didn’t invent a whole new thing, but I made it a lot easier and more accessible, and that showed people the possibilities that they previously wouldn’t see. And now they apply it for good.
Lex Fridman (03:12:02) Mm-hmm.
Peter Steinberger (03:12:03) Or also the fact that, yes, I suggest the latest and best models, but you can totally run this on free models. You can run this locally. You can run this on Kimi or other models that are way more accessible price-wise, and still have a very powerful system that might otherwise not be possible. Because other things, like Anthropic’s Cowork, are locked into their space. So it’s not all black and white. I got a lot of emails that were heartwarming and amazing. And, I don’t know, it just made me really happy.
Lex Fridman (03:12:48) Yeah, there’s a lot… It has brought joy into a lot of people’s lives. Not just programmers. A lot of people’s lives. It’s beautiful to see. What gives you hope about this whole thing we have going on with human civilization?

Future of OpenClaw community

Peter Steinberger (03:13:03) I mean, I inspired so many people. There’s this whole builder vibe again. People are now using AI in a more playful way and are discovering what it can do and how it can help them in their life, and creating new places that are just sprawling with creativity. I don’t know. Like, there’s ClawCon in Vienna, with like 500 people. And there’s such a high percentage of people who want to present, which to me is really surprising, because usually it’s quite hard to find people who want to talk about what they built. And now there’s an abundance. So that gives me hope that we can figure shit out.
Lex Fridman (03:14:00) And it makes it accessible to basically everybody.
Peter Steinberger (03:14:04) Yeah.
Lex Fridman (03:14:05) Just imagine all these people building, especially as you make it simpler and simpler, more secure. It’s like anybody who has ideas and can express those ideas in language can build. That’s crazy.
Peter Steinberger (03:14:22) Yeah, that’s ultimately power to the people, and one of the beautiful things that come out of AI. Not just a slop generator.
Lex Fridman (03:14:36) Well, Mr. Clawfather… I just realized that when I said that in the beginning, I violated two trademarks, because there’s also The Godfather. I’m getting sued by everybody. You’re a wonderful human being. You’ve created something really special: a special community, a special product, a special set of ideas. Plus the humor, the good vibes, the inspiration of all these people building, the excitement to build. So I’m truly grateful for everything you’ve been doing and for who you are, and for sitting down to talk with me today. Thank you, brother.
Peter Steinberger (03:15:14) Thanks for giving me the chance to tell my story.
Lex Fridman (03:15:17) Thanks for listening to this conversation with Peter Steinberger. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback and so on. And now let me leave you with some words from Voltaire. “With great power comes great responsibility.” Thank you for listening, and hope to see you next time.