Transcript for George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God | Lex Fridman Podcast #387

This is a transcript of Lex Fridman Podcast #387 with George Hotz. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated, and may have errors. Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation. Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman (00:00:00) What possible ideas do you have for how human species ends?
George Hotz (00:00:03) Sure. I think the most obvious way to me is wire heading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it’s even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it’s probably hard to get all of humanity.
Lex Fridman (00:00:27) Yeah. The interesting thing about humanity is the diversity in it.
George Hotz (00:00:30) Oh, yeah.
Lex Fridman (00:00:31) Organisms in general. There’s a lot of weirdos out there, two of them are sitting here.
George Hotz (00:00:36) I mean, diversity in humanity is-
Lex Fridman (00:00:38) With due respect.
George Hotz (00:00:40) I wish I was more weird.
Lex Fridman (00:00:44) The following is a conversation with George Hotz, his third time on this podcast. He’s the founder of Comma.ai that seeks to solve autonomous driving and is the founder of a new company called tiny corp that created tinygrad, a neural network framework that is extremely simple with the goal of making it run on any device by any human easily and efficiently. As you know, George also did a large number of fun and amazing things from hacking the iPhone to recently joining Twitter for a bit as a “intern”, making the case for refactoring the Twitter code base.
(00:01:23) In general he’s a fascinating engineer and human being, and one of my favorite people to talk to. This is a Lex Fridman podcast. To support it please check out our sponsors in the description. Now, dear friends, here’s George Hotz. You mentioned something in a stream about the philosophical nature of time. Let’s start with a wild question. Do you think time is an illusion?

Time is an illusion

George Hotz (00:01:47) You know, I sell phone calls to Comma for a thousand dollars and some guy called me. It’s a thousand dollars. You can talk to me for half an hour. He is like, “Yeah, okay. Time doesn’t exist and I really wanted to share this with you.” I’m like, “Oh, what do you mean time doesn’t exist?” I think time is a useful model, whether it exists or not. Right. Does quantum physics exist? Well, it doesn’t matter. It’s about whether it’s a useful model to describe reality. Is time maybe compressive?
Lex Fridman (00:02:25) Do you think there is an objective reality or is everything just useful models? Underneath it all is there an actual thing that we’re constructing models for?
George Hotz (00:02:35) I don’t know.
Lex Fridman (00:02:39) I was hoping you would know.
George Hotz (00:02:40) I don’t think it matters.
Lex Fridman (00:02:42) I mean, this connects to the models of constructive reality with machine learning, right?
George Hotz (00:02:47) Sure.
Lex Fridman (00:02:49) Is it just nice to have useful approximations of the world such that we can do something with it?
George Hotz (00:02:55) There are things that are real. [inaudible 00:02:57] complexity is real.
Lex Fridman (00:02:59) Yeah.
George Hotz (00:02:59) Yeah. The compressive-
Lex Fridman (00:03:00) Math.
George Hotz (00:03:02) Math is real. Yeah.
Lex Fridman (00:03:03) Should be a T-shirt.
George Hotz (00:03:05) I think hard things are actually hard. I don’t think P equals NP.
Lex Fridman (00:03:09) Ooh. Strong words.
George Hotz (00:03:10) Well, I think that’s the majority. I do think factoring is in P.
Lex Fridman (00:03:14) I don’t think you’re the person that follows the majority in all walks of life.
George Hotz (00:03:18) For that one I do
Lex Fridman (00:03:19) Yeah. In theoretical computer science, you’re one of the sheep. All right. To you time is a useful model.
George Hotz (00:03:28) Sure.
Lex Fridman (00:03:29) What were you talking about on the stream about time? Are you made of time?
George Hotz (00:03:33) If I remembered half the things I said on stream. Someday someone’s going to make a model of all of it and it’s going to come back to haunt me.
Lex Fridman (00:03:40) Someday soon?
George Hotz (00:03:41) Yeah, probably.
Lex Fridman (00:03:42) Would that be exciting to you or sad that there’s a George Hotz model?
George Hotz (00:03:48) I mean, the question is when the George Hotz model is better than George Hotz, like I am declining and the model is growing.
Lex Fridman (00:03:54) What is the metric by which you measure better or worse in that, if you are competing with yourself?
George Hotz (00:04:00) Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.
Lex Fridman (00:04:06) People close to you or strangers?
George Hotz (00:04:09) Either one. It will hurt more when it’s people close to me, but both will be overtaken by the George Hotz model.
Lex Fridman (00:04:16) It’d be quite painful. Loved ones, family members would rather have the model over for Thanksgiving than you or significant others would rather sext with the large language model version of you.
George Hotz (00:04:35) Especially when it’s fine-tuned to their preferences.
Lex Fridman (00:04:39) Yeah. Well, that’s what we’re doing in a relationship. We’re just fine-tuning ourselves, but we’re inefficient with it because we’re selfish and greedy and so on. Language models can fine-tune more efficiently, more selflessly.
George Hotz (00:04:51) There’s a Star Trek Voyager episode where Kathryn Janeway lost in the delta quadrant makes herself a lover on the Holodeck, and the lover falls asleep on her arm and he snores a little bit. Janeway edits the program to remove that. Then of course the realization is, wait, this person’s terrible. It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. I don’t think we’re going to realize that until it’s too late.
Lex Fridman (00:05:24) Well, I think a large language model could incorporate the flaws and the quirks and all that kind of stuff.
George Hotz (00:05:30) Just the perfect amount of quirks and flaws to make you charming without crossing the line.
Lex Fridman (00:05:36) Yeah, and that’s probably a good approximation of the percent of time the language model should be cranky or an asshole or jealous or all this kind of stuff.
George Hotz (00:05:52) Of course it can and it will. All that difficulty at that point is artificial. There’s no more real difficulty.
Lex Fridman (00:05:59) What’s the difference between real and artificial?
George Hotz (00:06:01) Artificial difficulty is difficulty that’s like constructed or could be turned off with a knob. Real difficulty is like you’re in the woods and you got to survive.
Lex Fridman (00:06:11) If something cannot be turned off with a knob it’s real?
George Hotz (00:06:16) Yeah, I think so. I mean, you can’t get out of this by smashing the knob with a hammer. I mean, maybe you can, Into the Wild when Alexander Supertramp, he wants to explore something that’s never been explored before, but it’s the nineties. Everything’s been explored. He’s like, “Well, I’m just not going to bring a map.”
Lex Fridman (00:06:36) Yeah.
George Hotz (00:06:36) I mean, no, you’re not exploring. You should have brought a map dude. You died. There was a bridge a mile from where you were camping.
Lex Fridman (00:06:44) How does that connect to the metaphor of the knob?
George Hotz (00:06:46) By not bringing the map, you didn’t become an explorer. You just smashed the thing.
Lex Fridman (00:06:53) Yeah.
George Hotz (00:06:53) Yeah. The difficulty is still artificial.
Lex Fridman (00:06:56) You failed before you started. What if we just don’t have access to the knob?
George Hotz (00:07:00) Well, that maybe is even scarier. We already exist in a world of nature, and nature has been fine-tuned over billions of years. To have humans build something and then throw the knob away in some grand romantic gesture is horrifying.
Lex Fridman (00:07:21) Do you think of us humans as individuals that are born and die or are we just all part of one living organism that is earth, that is nature?
George Hotz (00:07:33) I don’t think there’s a clear line there. I think it’s all kind of just fuzzy. I don’t know. I mean, I don’t think I’m conscious. I don’t think I’m anything. I think I’m just a computer program.
Lex Fridman (00:07:44) It’s all computation, everything running in your head is just computation.
George Hotz (00:07:49) Everything running in the universe is computation, I think. I believe the extended [inaudible 00:07:53] thesis.
Lex Fridman (00:07:56) There seems to be an embodiment to your particular computation. There’s a consistency.
George Hotz (00:08:00) Well, yeah, but I mean, models have consistency too.
Lex Fridman (00:08:04) Yeah.
George Hotz (00:08:05) Models that have been RLHF’d will continually say like, well, how do I murder ethnic minorities? Oh, well, I can’t let you do that, Hal. There’s a consistency to that behavior.
Lex Fridman (00:08:15) It’s all RLHF. We RLHF each other. We provide human feedback and thereby fine-tune these little pockets of computation. It’s still unclear why that pocket of computation stays with you for years. You have this consistent set of physics, biology, whatever you call the neurons firing like the electrical signals, the mechanical signals, all of that that seems to stay there. It contains information. It stores information, and that information permeates through time and stays with you. There’s like memory, there’s like sticky.
George Hotz (00:09:01) To be fair, a lot of the models we’re building today are very… Even RLHF is nowhere near as complex as the human loss function.
Lex Fridman (00:09:08) Reinforcement learning with human feedback.
George Hotz (00:09:11) When I talked about will GPT12 be AGI, my answer is no. Of course not. I mean, cross-entropy loss is never going to get you there. You need probably RL in fancy environments in order to get something that would be considered AGI-like. To ask the question about why? I don’t know. It’s just some quirk of evolution. I don’t think there’s anything particularly special about where I ended up, where humans ended up.
Lex Fridman (00:09:40) Okay, we have human level intelligence. Would you call that AGI, whatever we have, GI?
George Hotz (00:09:47) Look, actually, I don’t really even like the word AGI, but general intelligence is defined to be whatever humans have.
Lex Fridman (00:09:55) Okay, so why can GPT-12 not get us to AGI? Can we just linger on that?
George Hotz (00:10:02) If your loss function is categorical cross-entropy, if your loss function is just try to maximize compression. I have a SoundCloud I rap and I tried to get Chat-GPT to help me write raps and the raps that it wrote sounded like YouTube comment raps. You can go on any rap beat online and you can see what people put in the comments. It’s the most mid quality rap you can find.
Lex Fridman (00:10:23) Is mid good or bad?
George Hotz (00:10:24) Mid is bad.
Lex Fridman (00:10:25) Mid is bad.
George Hotz (00:10:25) It’s like mid.
Lex Fridman (00:10:27) Every time I talk to you, I learn new words. Mid.
George Hotz (00:10:32) Mid. Yeah.
Lex Fridman (00:10:35) I was like, is it like basic? Is that what mid means?
George Hotz (00:10:37) Kind of. It’s like middle of the curve, right?
Lex Fridman (00:10:39) Yeah.
George Hotz (00:10:40) There’s like that intelligence curve and you have the dumb guy, the smart guy, and then the mid guy. Actually being the mid guy is the worst. The smart guy is like I put all my money in Bitcoin. The mid guy is like, “You can’t put money in Bitcoin. It’s not real money.”
Lex Fridman (00:10:55) All of it is a genius meme. That’s another interesting one. Memes, the humor, the idea, the absurdity encapsulated in a single image and it just propagates virally between all of our brains. I didn’t get much sleep last night, so I sound like I’m high. I swear I’m not. Do you think we have ideas or ideas have us?

Memes

George Hotz (00:11:24) I think that we’re going to get super scary memes once the AIs actually are superhuman.
Lex Fridman (00:11:30) You think AI will generate memes?
George Hotz (00:11:31) Of course.
Lex Fridman (00:11:32) You think it’ll make humans laugh?
George Hotz (00:11:35) I think it’s worse than that. Infinite Jest, it’s introduced in the first 50 pages, is about a tape that once you watch it once you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, “Okay, here’s a hack saw. Cut off your pinky and then I’ll let you watch the tape again.” You’ll do it. We’re actually going to build that, I think, but it’s not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at ant brains, maybe they can be stuck on a static tape, but we’re going to build that using generative models. We’re going to build the TikTok that you actually can’t look away from.
Lex Fridman (00:12:16) TikTok is already pretty close there, but the generation is done by humans. The algorithm is just doing their recommendation. If the algorithm is also able to do the generation.
George Hotz (00:12:25) Well, it’s a question about how much intelligence is behind it. The content is being generated by let’s say, one humanity worth of intelligence, and you can quantify a humanity, its exaflops, [inaudible 00:12:40], but you can quantify it. Once that generation is being done by a hundred humanities, you’re done.
Lex Fridman (00:12:48) It’s actually scale that’s the problem, but also speed. Yeah. What if it’s manipulating the very limited human dopamine engine, so porn? Imagine just TikTok, but for porn.
George Hotz (00:13:05) Yeah.
Lex Fridman (00:13:06) It’s like a brave new world.
George Hotz (00:13:08) I don’t even know what it’ll look like. Again, you can’t imagine the behaviors of something smarter than you, but a super intelligent, an agent that just dominates your intelligence so much will be able to completely manipulate you.
Lex Fridman (00:13:24) Is it possible that it won’t really manipulate? It’ll just move past us. It’ll just exist the way water exists or the air exists.
George Hotz (00:13:33) You see, and that’s the whole AI safety thing. It’s not the machine that’s going to do that. It’s other humans using the machine that are going to do that to you.
Lex Fridman (00:13:44) Because the machine is not interested in hurting humans. It’s just…
George Hotz (00:13:47) The machine is a machine, but the human gets the machine and there’s a lot of humans out there very interested in manipulating you.

Eliezer Yudkowsky

Lex Fridman (00:13:55) Well, let me bring up, Eliezer Yudkowsky who recently sat where you’re sitting. He thinks that AI will almost surely kill everyone. Do you agree with him or not?
George Hotz (00:14:09) Yes, but maybe for a different reason.
Lex Fridman (00:14:14) Then I’ll try to get you to find hope or we could find a note to that answer. But why yes?
George Hotz (00:14:23) Okay. Why didn’t nuclear weapons kill everyone?
Lex Fridman (00:14:26) That’s a good question.
George Hotz (00:14:27) I think there’s an answer. I think it’s actually very hard to deploy nuclear weapons tactically. It’s very hard to accomplish tactical objectives. Great. I can nuke their country. I have an irradiated pile of rubble. I don’t want that.
Lex Fridman (00:14:39) Why not?
George Hotz (00:14:40) Why don’t I want an irradiated pile of rubble?
Lex Fridman (00:14:43) Yeah.
George Hotz (00:14:43) For all the reasons no one wants an irradiated pile of rubble.
Lex Fridman (00:14:46) Oh, because you can’t use that land for resources. You can’t populate the land.
George Hotz (00:14:52) Yeah. Well, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It’s the subjugation and domination of the people.
Lex Fridman (00:15:03) Okay. You can’t use this strategically, tactically in a war to help gain a military advantage. It’s all complete destruction. All right.
George Hotz (00:15:16) Yeah.
Lex Fridman (00:15:16) There’s egos involved. It’s still surprising that nobody pressed the big red button.
George Hotz (00:15:22) It’s somewhat surprising. You see, it’s the little red button that’s going to be pressed with AI, and that’s why we die. It’s not because the AI, if there’s anything in the nature of AI, it’s just the nature of humanity.
Lex Fridman (00:15:37) What’s the algorithm behind the little red button? What possible ideas do you have for how human species ends?
George Hotz (00:15:45) Sure. I think the most obvious way to me is wire heading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it’s even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it’s probably hard to get all of humanity.
Lex Fridman (00:16:10) Yeah.
George Hotz (00:16:11) Yeah. It probably is.
Lex Fridman (00:16:15) The interesting thing about humanity is the diversity in it.
George Hotz (00:16:17) Oh yeah.
Lex Fridman (00:16:18) Organisms in general. There’s a lot of weirdos out there. Well, two of them are sitting here.
George Hotz (00:16:23) I mean, diversity in humanity is-
Lex Fridman (00:16:25) With due respect.
George Hotz (00:16:27) I wish I was more weird. No, look, I’m drinking Smart water, man. That’s like a Coca-Cola product, right?
Lex Fridman (00:16:33) You went corporate George Hotz.
George Hotz (00:16:35) Yeah, I went corporate. No, the amount of diversity and humanity I think is decreasing. Just like all the other biodiversity on the planet.
Lex Fridman (00:16:42) Oh boy. Yeah.
George Hotz (00:16:43) Right.
Lex Fridman (00:16:44) Social media’s not helping.
George Hotz (00:16:45) Go eat McDonald’s in China.
Lex Fridman (00:16:47) Yeah.
George Hotz (00:16:49) Yeah. No, it’s the interconnectedness that’s doing it.
Lex Fridman (00:16:54) Oh, that’s interesting. Everybody starts relying on the connectivity of the internet. Over time, that reduces the diversity, the intellectual diversity, and then that gets everybody into a funnel. There’s still going to be a guy in Texas.
George Hotz (00:17:08) There is.
Lex Fridman (00:17:09) And a bunker.
George Hotz (00:17:10) To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that’s actually incredibly hard to do.
Lex Fridman (00:17:22) Yeah, but society, if we start over, that’s tricky. Most of us don’t know how to do most things.
George Hotz (00:17:28) Yeah, but some of us do, and they’ll be okay and they’ll rebuild after the great AI.
Lex Fridman (00:17:36) What’s rebuilding look like? How much do we lose? What has human civilization done that’s interesting? Combustion engine, electricity. So power and energy. That’s interesting. How to harness energy.
George Hotz (00:17:54) Whoa, whoa, whoa, whoa. They’re going to be religiously against that.
Lex Fridman (00:17:58) Are they going to get back to fire?
George Hotz (00:18:02) Sure. I mean, it’s be like some kind of Amish looking kind of thing. I think they’re going to have very strong taboos against technology.
Lex Fridman (00:18:13) Technology is almost like a new religion. Technology is the devil and nature is God.
George Hotz (00:18:20) Sure.
Lex Fridman (00:18:20) Closer to nature. Can you really get away from AI if it destroyed 99% of the human species, isn’t somehow have a hold like a stronghold?
George Hotz (00:18:30) Well, what’s interesting about everything we build, I think we’re going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird. A bird is an incredibly robust organism. We’ve built nothing like this. We haven’t built a machine that’s capable of reproducing.
Lex Fridman (00:18:58) I work with Lego robots a lot now. I have a bunch of them. They’re mobile. They can’t reproduce. All they need is, I guess you’re saying they can’t repair themselves. If you have a large number, if you have a hundred million of them-
George Hotz (00:19:13) Let’s just focus on them reproducing. Do they have microchips in them?
Lex Fridman (00:19:16) Mm-hmm (affirmative).
George Hotz (00:19:16) Okay. Then do they include a fab?
Lex Fridman (00:19:20) No.
George Hotz (00:19:21) Then how are they going to reproduce?
Lex Fridman (00:19:22) Well, it doesn’t have to be all on board. They can go to a factory, to a repair shop.
George Hotz (00:19:29) Yeah, but then you’re really moving away from robustness.
Lex Fridman (00:19:33) Yes.
George Hotz (00:19:33) All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. If the AI apocalypse happens, I mean the AIs are going to probably die out because I think we’re going to get, again, super intelligence long before we get robustness.
Lex Fridman (00:19:55) What about if you just improve the fab to where you just have a 3D printer that can always help you?
George Hotz (00:20:03) Well, that’d be very interesting. I’m interested in building that.
Lex Fridman (00:20:06) Of course, you are. How difficult is that problem to have a robot that basically can build itself?
George Hotz (00:20:15) Very, very hard.
Lex Fridman (00:20:16) I think you’ve mentioned this to me or somewhere where people think it’s easy conceptually.
George Hotz (00:20:24) Then they remember that you’re going to have to have a fab.
Lex Fridman (00:20:27) Yeah, on board.
George Hotz (00:20:30) Of course.
Lex Fridman (00:20:30) 3D printer that prints a 3D printer.
George Hotz (00:20:34) Yeah.
Lex Fridman (00:20:34) On legs. Why’s that hard?
George Hotz (00:20:37) Well, I mean, a 3D printer is a very simple machine, right? Okay, you’re going to print chips, you’re going to have an atomic printer. How are you going to dope the silicon?
Lex Fridman (00:20:47) Yeah.
George Hotz (00:20:48) Right. How you going to etch the silicon?
Lex Fridman (00:20:51) You’re going to have a very interesting kind of fab if you want to have a lot of computation on board. You can do structural type of robots that are dumb.
George Hotz (00:21:04) Yeah, but structural type of robots aren’t going to have the intelligence required to survive in any complex environment.
Lex Fridman (00:21:11) What about like ants type of systems? We have trillions of them.
George Hotz (00:21:15) I don’t think this works. I mean, again, ants at their very core are made up of cells that are capable of individually reproducing.
Lex Fridman (00:21:22) They’re doing quite a lot of computation that we’re taking for granted.
George Hotz (00:21:26) It’s not even just the computation. It’s that reproduction is so inherent. There’s two stacks of life in the world. There’s the biological stack and the silicon stack. The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, despite, as far as it’s come, is nowhere near being able to reproduce.
Lex Fridman (00:21:51) Yeah, So the fab movement, digital fabrication, fabrication in the full range of what that means is still in the early stages.
George Hotz (00:22:04) Yeah.
Lex Fridman (00:22:04) You’re interested in this world?
George Hotz (00:22:06) Even if you did put a fab on the machine, let’s say, okay, yeah, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines in the fabs also in the machine. First off, this machine’s going to be absolutely massive. I mean, we almost have a… Think of the size of the thing required to reproduce a machine today. Is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
Lex Fridman (00:22:34) If we were to construct a machine that is made up of humans, like a company that can reproduce itself?
George Hotz (00:22:40) Yeah.
Lex Fridman (00:22:40) I don’t know. It feels like 115 people.
George Hotz (00:22:47) I think it’s so much harder than that.
Lex Fridman (00:22:50) 120? I’m looking for a number.
George Hotz (00:22:52) Let’s see. I believe that Twitter can be run by 50 people. I think that this is going to take most of, it’s just most of society. We live in one globalized world now.
Lex Fridman (00:23:04) No, but you’re not interested in running Twitter, you’re interested in seeding. You want to seed a civilization and then because humans can like have sex.
George Hotz (00:23:14) Yeah. Okay. You’re talking about the humans reproducing and basically what’s the smallest self-sustaining colony of humans?
Lex Fridman (00:23:19) Yeah.
George Hotz (00:23:20) Yeah. Okay, fine but they’re not going to be making five nanometer chips.
Lex Fridman (00:23:22) Over time they will. We have to expand our conception of time here going back to the original timescale. I mean, over across maybe a hundred generations we’re back to making chips. No? If you seed the colony correctly.
George Hotz (00:23:40) Maybe, or maybe they’ll watch our colony die out over here and be like, “We’re not making chips. Don’t make chips.”
Lex Fridman (00:23:46) No, but you have to seed that colony correctly.
George Hotz (00:23:48) Whatever you do, don’t make chips. Chips are what led to their downfall.
Lex Fridman (00:23:54) Well, that is the thing that humans do. They construct a devil a good thing and a bad thing, and they really stick by that and then they murder each other over that. There’s always one asshole in the room who murders everybody and usually makes tattoos and nice branding with flags and stuff.
George Hotz (00:24:10) Do you need that asshole, that’s the question. Humanity works really hard today to get rid of that asshole, but I think they might be important.
Lex Fridman (00:24:16) Yeah. This whole freedom of speech thing, it’s the freedom of being an asshole seems kind of important.
George Hotz (00:24:22) That’s right.
Lex Fridman (00:24:23) Man. This thing, this fab, this human fab that we constructed, this human civilization is pretty interesting. Now it’s building artificial copies of itself or artificial copies of various aspects of itself that seem interesting like intelligence. I wonder where that goes.
George Hotz (00:24:44) I like to think it’s just another stack for life. We have the biostack life. We’re a biostack life, and then the silicon stack life.
Lex Fridman (00:24:50) It seems like the ceiling, or there might not be a ceiling, or at least the ceiling is much higher for the silicon stack.
George Hotz (00:24:57) Oh, no. We don’t know what the ceiling is for the biostack either. The biostack just seems to move slower. You have Moore’s law, which is not dead despite many proclamations.
Lex Fridman (00:25:09) In the biostack or the silicon stack?
George Hotz (00:25:11) In the silicon stack. You don’t have anything like this in the biostack. I have a meme that I posted. I tried to make a meme. It didn’t work too well, but I posted a picture of Ronald Reagan and Joe Biden, and you look, this is 1980 and this is 2020.
Lex Fridman (00:25:24) Yeah.
George Hotz (00:25:24) These two humans are basically the same, right? No, there’s been no change in humans in the last 40 years. Then I posted a computer from 1980 in a computer from 2020. Wow.
Lex Fridman (00:25:41) Yeah. With their early stages, which is why you said, when you said the size of the fab required to make another fab is very large right now.
George Hotz (00:25:52) Yeah.
Lex Fridman (00:25:53) Computers were very large 80 years ago, and they got pretty tiny and people are starting to want to wear them on their face in order to escape reality. That’s a thing. In order to live inside the computer, but a screen right here, I don’t have to see the rest of you assholes.
George Hotz (00:26:18) I’ve been ready for a long time.

Virtual reality

Lex Fridman (00:26:19) You like virtual reality?
George Hotz (00:26:20) I love it.
Lex Fridman (00:26:22) Do you want to live there?
George Hotz (00:26:23) Yeah.
Lex Fridman (00:26:25) Yeah. Part of me does too. How far away are we do you think?
George Hotz (00:26:31) Judging from what you can buy today? Far, very far.
Lex Fridman (00:26:35) I got to tell you that I had the experience of Meta’s Codec avatar where it’s a ultra-high resolution scan. It looked real.
George Hotz (00:26:51) I mean, the headsets just are not quite at eye resolution yet. I haven’t put on any headset where I’m like, “Oh, this could be the real world.” Whereas when I put good headphones on, audio is there. We can reproduce audio that I’m like, “I’m actually in a jungle right now. If I close my eyes, I can’t tell I’m not.”
Lex Fridman (00:27:09) Yeah. Then there’s also smell and all that kind of stuff.
George Hotz (00:27:11) Sure.
Lex Fridman (00:27:13) I don’t know. The power of imagination or the power of the mechanism in the human mind that fills the gaps that reaches and wants to make the thing you see in the virtual world real to you. I believe in that power.
George Hotz (00:27:29) Or humans want to believe.
Lex Fridman (00:27:30) Yeah. What if you’re lonely? What if you’re sad? What if you’re really struggling in life, and here’s a world where you don’t have to struggle anymore?
George Hotz (00:27:39) Humans want to believe so much that people think the large language models are conscious. That’s how much humans want to believe.
Lex Fridman (00:27:46) Strong words, he’s throwing left and right hooks. Why do you think large language models are not conscious?
George Hotz (00:27:53) I don’t think I’m conscious.
Lex Fridman (00:27:55) Oh, so what is consciousness then George Hotz?
George Hotz (00:27:58) It’s like what it seems to mean to people it’s just a word that atheists use for souls.
Lex Fridman (00:28:04) Sure. That doesn’t mean soul is not an interesting word.
George Hotz (00:28:08) If consciousness is a spectrum, I’m definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
Lex Fridman (00:28:19) When is the last time you’ve seen a chicken?
George Hotz (00:28:22) In Miami, a couple months ago.
Lex Fridman (00:28:26) No. A living chicken.
George Hotz (00:28:27) Just living chickens walking around Miami. It’s crazy.
Lex Fridman (00:28:30) Like on the street?
George Hotz (00:28:30) Yeah.
Lex Fridman (00:28:31) Like a chicken?
George Hotz (00:28:32) A chicken. Yeah.
Lex Fridman (00:28:36) All right. I was trying to call you out, like a good journalist, and I got shut down. Okay. You don’t think much about this subjective feeling that it feels like something to exist. Then as an observer, you can have a sense that an entity is not only intelligent, but has a subjective experience of its reality, like a self-awareness that is capable of suffering, of hurting, of being excited by the environment in a way that’s not merely an artificial response, but a deeply felt one.
George Hotz (00:29:22) Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they’d think the rock is sad.
Lex Fridman (00:29:32) You’re saying when we look in the mirror, we apply the same smiley face with rock?
George Hotz (00:29:36) Pretty much, yeah.
Lex Fridman (00:29:38) Isn’t that weird though, that you’re not conscious?
George Hotz (00:29:42) No.
Lex Fridman (00:29:43) You do believe in consciousness?
George Hotz (00:29:45) Not really.
Lex Fridman (00:29:46) It’s unclear. Okay. To you it’s like a little symptom of the bigger thing that’s not that important.
George Hotz (00:29:53) Yeah. I mean, it’s interesting that the human systems seem to claim that they’re conscious, and I guess it says something in a straight up, even if you don’t believe in consciousness, what do people mean when they say consciousness? There’s definitely meanings to it.
Lex Fridman (00:30:06) What’s your favorite thing to eat?
George Hotz (00:30:11) Pizza.
Lex Fridman (00:30:12) Cheese pizza. What are the toppings?
George Hotz (00:30:13) I like cheese pizza. I like pepperoni.
Lex Fridman (00:30:14) Don’t say pineapple.
George Hotz (00:30:15) No, I don’t like pineapple.
Lex Fridman (00:30:16) Okay. Pepperoni pizza.
George Hotz (00:30:17) If they put any ham on it I’ll just feel bad.
Lex Fridman (00:30:20) What’s the best pizza? What are we talking about here? Do you like cheap, crappy pizza?
George Hotz (00:30:24) A Chicago deep dish cheese pizza. Oh, that’s my favorite.
Lex Fridman (00:30:27) There you go. You bite into a Chicago deep dish pizza, and it feels like, so you were starving, you haven’t eaten for 24 hours. You just bite in and you’re hanging out with somebody that matters a lot to you. You’re there with the pizza.
George Hotz (00:30:39) That sounds real nice, man.
Lex Fridman (00:30:40) Yeah. All right. It feels like something I’m George motherfucking Hotz eating a fucking Chicago deep dish pizza. There’s just the full peak living experience of being human, the top of the human condition.
George Hotz (00:30:57) Sure.
Lex Fridman (00:30:58) It feels like something to experience that.
George Hotz (00:31:00) Mm-hmm (affirmative).
Lex Fridman (00:31:02) Why does it feel like something? That’s consciousness, isn’t it?
George Hotz (00:31:06) If that’s the word you want to use to describe it. Sure. I’m not going to deny that that feeling exists. I’m not going to deny that I experienced that feeling. I guess what I take issue to is that there’s some like how does it feel to be a web server? Do 404s hurt?
Lex Fridman (00:31:23) Not yet.
George Hotz (00:31:24) How would you know what suffering looked like? Sure you can recognize a suffering dog because we’re the same stack as the dog. All the biostack stuff kind of, especially mammals. It’s really easy. You can…
Lex Fridman (00:31:35) Game recognizes game.
George Hotz (00:31:37) Yeah. Versus the silicon stack stuff it’s like, you have no idea. Wow the little thing has learned to mimic. Then I realized that that’s all we are too. Well, look, the little thing has learned to mimic.
Lex Fridman (00:31:54) Yeah. I guess, yeah. 404 could be suffering, but it’s so far from our kind-
Lex Fridman (00:32:03) … So far from our kind of living organism, our kind of stack. It feels like AI can start maybe mimicking the biological stack better, better, better. It’s trained.
George Hotz (00:32:13) We trained it, yeah.
Lex Fridman (00:32:15) In that, maybe that’s the definition of consciousness is the bio stack consciousness.
George Hotz (00:32:20) The definition of consciousness is how close something looks to human. Sure, I’ll give you that one.
Lex Fridman (00:32:24) No, how close something is to the human experience.
George Hotz (00:32:28) Sure. It’s a very anthropro-centric definition, but…
Lex Fridman (00:32:33) Well, that’s all we got.

AI friends

George Hotz (00:32:34) Sure. No. I think there’s a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends. I mean it.
Lex Fridman (00:32:43) I want to find out what your fourth company is after that.
George Hotz (00:32:46) Oh, wow.
Lex Fridman (00:32:46) I think once you have AI girlfriends, oh boy, does it get interesting. Well, maybe let’s go there. The relationships with AI, that’s creating human-like organisms. Part of being human is being conscious, is having the capacity to suffer, having the capacity to experience this life richly, in such a way that you can empathize, that AI system going to empathize with you, and you can empathize with it, or you can project your anthropomorphic sense of what the other entity is experiencing.
(00:33:22) An AI model would need to create that experience inside your mind. It doesn’t seem that difficult.
George Hotz (00:33:28) Yeah. Okay, so here’s where it actually gets totally different. When you interact with another human, you can make some assumptions.
Lex Fridman (00:33:37) Yeah.
George Hotz (00:33:38) When you interact with these models, you can’t. You can make some assumptions that other human experiences suffering and pleasure in a pretty similar way to you do, the golden rule applies. With an AI model, this isn’t really true. These large language models are good at fooling people, because they were trained on a whole bunch of human data and told to mimic it.
Lex Fridman (00:33:59) Yep, but if the AI system says, “Hi, my name is Samantha,” it has a backstory. “Went to college here and there,” maybe it’ll integrate this in the AI system.
George Hotz (00:34:11) I made some chatbots. I gave them back stories. It was lots of fun. I’m so happy when Lama came out.
Lex Fridman (00:34:16) Yeah. Well, we’ll talk about Lama, we’ll talk about all that. The rock with a smiley face, it seems pretty natural for you to anthropomorphize that thing and then start dating it. Before you know it, you’re married and have kids
George Hotz (00:34:33) With a rock?
Lex Fridman (00:34:34) With a rock, and there’s pictures on Instagram with you and a rock and a smiley face.
George Hotz (00:34:38) To be fair, something that people generally look for when they’re looking for someone to date is intelligence in some form. The rock doesn’t really have intelligence. Only a pretty desperate person would date a rock.
Lex Fridman (00:34:50) I think we’re all desperate, deep down.
George Hotz (00:34:52) Oh, not rock level desperate.
Lex Fridman (00:34:54) All right. Not rock level desperate, but AI level desperate. I don’t know. I think all of us have a deep loneliness. It just feels like the language models are there.
George Hotz (00:35:09) Oh, I agree. You know what? I won’t even say this so cynically. I will actually say this in a way that I want AI friends. I do.
Lex Fridman (00:35:14) Yeah.
George Hotz (00:35:16) I would love to. Again, the language models now are still a little… People are impressed with these GPT things, or the Copilot, the coding one. I’m like, “Okay, this is junior engineer level, and these people are Fiverr level artists and copywriters.” Okay, great. We got Fiverr and junior engineers. Okay, cool. This is just the start, and it will get better, right? I can’t wait to have AI friends who are more intelligent than I am.
Lex Fridman (00:35:50) Fiverr is just a temporary, it’s not the ceiling?
George Hotz (00:35:52) No, definitely not.
Lex Fridman (00:35:53) Does it count as cheating when you’re talking to an AI model? Emotional cheating?
George Hotz (00:36:03) That’s up to you and your human partner to define.
Lex Fridman (00:36:07) Oh, you have to. All right.
George Hotz (00:36:08) You to have that conversation, I guess.
Lex Fridman (00:36:12) All right. Integrate that with porn and all this stuff.
George Hotz (00:36:16) Well, no, it’s similar kind of to porn.
Lex Fridman (00:36:18) Yeah.
George Hotz (00:36:18) Yeah. I think people in relationships have different views on that.
Lex Fridman (00:36:23) Yeah, but most people don’t have serious, open conversations about all the different aspects of what’s cool and what’s not. It feels like AI is a really weird conversation to have.
George Hotz (00:36:38) The porn one is a good branching off.
Lex Fridman (00:36:40) For sure.
George Hotz (00:36:40) One of my scenarios that I put in my chatbot is a nice girl named Lexi, she’s 20. She just moved out to LA. She wanted to be an actress, but she started doing Only Fans instead. You’re on a date with her. Enjoy.
Lex Fridman (00:36:56) Oh, man. Yeah. If you’re actually dating somebody in real life, is that cheating? I feel like it gets a little weird.
George Hotz (00:37:05) Sure.
Lex Fridman (00:37:05) It gets real weird. It’s like, what are you allowed to say to an AI bot? Imagine having that conversation with a significant other.
George Hotz (00:37:11) These are all things for people to define in their relationships. What it means to be human is just going to start to get weird.
Lex Fridman (00:37:17) Especially online. How do you know? There’ll be moments when you’ll have what you think is a real human you’re interacting with on Twitter for years, and you realize it’s not.
George Hotz (00:37:28) I spread, I love this meme, heaven banning. You hear about shadow-banning?
Lex Fridman (00:37:33) Yeah.
George Hotz (00:37:34) Right. Shadow-banning, okay, you post, no one can see it. Heaven banning, you post. No one can see it, but a whole lot of AIs are spot up to interact with you.
Lex Fridman (00:37:44) Well, maybe that’s what the way human civilization ends is all of us are heaven banned.
George Hotz (00:37:48) There’s a great, it’s called My Little Pony Friendship is optimal. It’s a sci-fi story that explores this idea.
Lex Fridman (00:37:56) Friendship is Optimal.
George Hotz (00:37:57) Friendship is Optimal.
Lex Fridman (00:37:58) Yeah. I’d like to have some, at least on the intellectual realm, some AI friends that argue with me. The romantic realm is weird, definitely weird, but not out of the realm of the kind of weirdness that human civilization is capable of, I think.
George Hotz (00:38:20) Look, I want it. If no one else wants it, I want it.
Lex Fridman (00:38:23) Yeah. I think a lot of people probably want it. There’s a deep loneliness.
George Hotz (00:38:27) I’ll fill their loneliness, and it just will only advertise to you some of the time.
Lex Fridman (00:38:33) Yeah. Maybe the conceptions of monogamy change too. I grew up in a time, I value monogamy, but maybe that’s a silly notion when you have arbitrary number of AI systems.
George Hotz (00:38:43) Yeah, on this interesting path from rationality to polyamory. Yeah. That doesn’t make sense for me,
Lex Fridman (00:38:50) For you, but you’re just a biological organism who was born before the internet really took off.
George Hotz (00:38:58) The crazy thing is, culture is whatever we define it as. These things are not… [inaudible 00:39:04] a problem and moral philosophy, right? Okay. What might be that computers are capable of mimicking girlfriends perfectly. They passed the girlfriend Turing test, but that doesn’t say anything about ought.
(00:39:18) That doesn’t say anything about how we ought to respond to them as a civilization. That doesn’t say we ought to get rid of monogamy. Right. That’s a completely separate question, really, a religious one.
Lex Fridman (00:39:27) Girlfriend Turing test. I wonder what that looks like.
George Hotz (00:39:30) Girlfriend Turing test.
Lex Fridman (00:39:31) Are you writing that? Will you be the Alan Turing of the 21st century that writes the Girlfriend Turing test?
George Hotz (00:39:38) No, of course, my AI girlfriends, their goal is to pass the girlfriend Turing test.
Lex Fridman (00:39:43) No, but there should be a paper that kind of defines the test. The question is if it’s deeply personalized, or if there’s a common thing that really gets everybody.
George Hotz (00:39:55) Yeah. Look, we’re a company. We don’t have to get everybody. We just have to get a large enough clientele to stay with us.

tiny corp

Lex Fridman (00:40:01) I like how you’re already thinking company. All right. Before we go to company number three and company number four, let’s go to company number two.
George Hotz (00:40:09) All right.
Lex Fridman (00:40:09) Tiny Corp, possibly one of the greatest names of all time for a company. You’ve launched a new company called Tiny Corp that leads the development of Tinygrad. What’s the origin story of Tiny Corp and Tinygrad?
George Hotz (00:40:25) I started Tinygrad as a toy project, just to teach myself, okay, what is a convolution? What are all these options you can pass to them? What is the derivative of convolution? Very similar to Karpathy wrote Micrograd. I’m very similar. Then I started realizing, I started thinking about AI chips. I started thinking about chips that run AI. I was like, “Well, okay. This is going to be a really big problem. If Nvidia becomes a monopoly here, how long before Nvidia is nationalized?”
Lex Fridman (00:41:04) One of the reasons to start Tiny Corp is to challenge Nvidia.
George Hotz (00:41:10) It’s not so much to challenge Nvidia. Actually, I like Nvidia. It’s to make sure power stays decentralized.
Lex Fridman (00:41:21) Yeah. Here, it’s computational power. To you, Nvidia is kind of locking down the computational power of the world.
George Hotz (00:41:31) Nvidia becomes just like 10X better than everything else, you’re giving a big advantage to somebody who can secure Nvidia as a resource.
Lex Fridman (00:41:41) Yeah.
George Hotz (00:41:42) In fact, if Jensen watches this podcast, he may want to consider this. He may want to consider making sure his company’s not nationalized.
Lex Fridman (00:41:50) Do you think that’s an actual threat?
George Hotz (00:41:52) Oh, yes.
Lex Fridman (00:41:55) No, but there’s so much, there’s AMD.
George Hotz (00:41:57) We have Nvidia and AMD. Great.
Lex Fridman (00:42:00) All right. You don’t think there’s a push towards selling Google selling TPUs or something like this? You don’t think there’s a push for that?
George Hotz (00:42:10) Have you seen it? Google loves to rent you TPUs.
Lex Fridman (00:42:14) It doesn’t, you can’t buy it at Best Buy?
George Hotz (00:42:18) No.
Lex Fridman (00:42:18) Okay.
George Hotz (00:42:18) I started work on a chip. I was like, “Okay, what’s it going to take to make a chip?” My first notions were all completely wrong about why, about how you could improve on GPUs. I’ll take this, this is from Jim Keller on your podcast. This is one of my absolute favorite descriptions of computation. There’s three kinds of computation paradigms that are common in the world today.
(00:42:45) There’s CPUs, and CPUs can do everything. CPUs can do add and multiply. They can do load and store, and they can do compare and branch. When I say they can do these things, they can do them all fast. Compare and branch are unique to CPUs. What I mean by they can do them fast is they can do things like branch prediction, and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast.
(00:43:09) Then you have a simpler computation model, GPUs. GPUs can’t really do compare and branch. They can, but it’s horrendously slow. GPUs can do arbitrary load and store. GPUs can do things like X, dereference Y, so they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data.
(00:43:27) The third model of computation is DSPs. DSPs are just a and multiply. They can do loads and stores, but only static load and stores. Only loads and stores that are known before the program runs. You look at neural networks today, and 95% of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies. Tiny Corp really took this idea, and I’m still working on it to extend this as far as possible, every stage of the stack has Turing completeness.
(00:43:58) Python has Turing completeness, and then we take Python, we go into C++, which is Turing complete, and then maybe C++ calls into some CUDA kernels, which are Turing complete. The CUDA kernels go through LVM, which is Turing complete, into PTX, which is Turing complete, into SaaS, which is Turing complete, on a Turing complete processor. I want to get Turing completeness out of the stack entirely.
(00:44:15) Once you get rid of Turing completeness, you can reason about things. Rice’s Theorem and the halting problem do not apply to [inaudible 00:44:20] machines.
Lex Fridman (00:44:23) Okay. What’s the power and the value of getting Turing completeness out of, are we talking about the hardware or the software?
George Hotz (00:44:31) Every layer of the stack.
Lex Fridman (00:44:32) Every layer.
George Hotz (00:44:32) Every layer of the stack. Removing Turing completeness allows you to reason about things. The reason you need to do branch prediction in a CPU, and the reason it’s prediction, and the branch predictors are, I think they’re like 99% on CPUs. Why do they get 1% of them wrong? Well, they get 1% wrong because you can’t know. That’s the halting problem. It’s equivalent to the halting problem to say whether a branch is going to be taken or not.
(00:44:56) I can show that. The ADMO machine, the neural network runs the identical compute every time. The only thing that changes is the data. When you realize this, you think about, “Okay, how can we build a computer, and how can we build a stack that takes maximal advantage of this idea?”
(00:45:19) What makes Tinygrad different from other neural network libraries is it does not have a primitive operator even for matrix multiplication. This is every single one. They even have primitive operators for things like convolutions.
Lex Fridman (00:45:31) No MatMul?
George Hotz (00:45:32) No MatMul. Well, here’s what a MatMul is. I’ll use my hands to talk here. If you think about a cube, and I put my two matrices that I’m multiplying on two faces of the cube, you can think about the matrix, multiply as, okay, the end cubed, I’m going to multiply for each one in the cubed. Then I’m going to do a sum, which is a reduce, up to here to the third phase of the cube. That’s your multiplied matrix.
(00:45:56) What a matrix multiply is is a bunch of shape operations, a bunch of permute three shapes and expands on the two matrices, a multiply and cubed, a reduce and cubed, which gives you an N-squared matrix.
Lex Fridman (00:46:09) Okay. What is the minimum number of operations it can accomplish that if you don’t have MatMul as a primitive?
George Hotz (00:46:16) Tinygrad has about 20, and you can compare Tinygrad’s op set or IR to things like XLA or Prim Torch. XLA and Prim Torch are ideas where like, okay, Torch has like 2000 different kernels. PyTorch 2.0 introduced Prim Torch, which has only 250. Tinygrad has order of magnitude 25. It’s 10X less than XLA or Prim Torch. You can think about it as kind of RISC versus SISC, right? These other things are SISC-like systems. Tinygrad is RISC.
Lex Fridman (00:46:53) RISC won.
George Hotz (00:46:54) RISC architecture is going to change everything. 1995, Hackers.
Lex Fridman (00:46:59) Wait, really? That’s an actual thing?
George Hotz (00:47:01) Angelina Jolie delivers the line, “RISC architecture is going to change everything,” in 1995.
Lex Fridman (00:47:06) Wow.
George Hotz (00:47:06) Here we are with ARM and the phones and ARM everywhere.
Lex Fridman (00:47:10) Wow. I love it when movies actually have real things in them.
George Hotz (00:47:13) Right?
Lex Fridman (00:47:14) Okay, interesting. You’re thinking of this as the RISC architecture of ML Stack. 25, huh? Can you go through the four OP types?
George Hotz (00:47:29) Sure. Okay. You have unary ops, which take in a tensor and return a tensor of the same size, and do some unary op to it. X, log, reciprocal, sin. They take in one and they’re point-wise.
Lex Fridman (00:47:44) Relu.
George Hotz (00:47:48) Yeah, Relu. Almost all activation functions are unary ops. Some combinations of unary ops together is still a unary op. Then you have binary ops. Binary ops are like point-wise addition, multiplication, division, compare. It takes in two tensors of equal size, and outputs one tensor. Then you have reduce ops. Reduce ops will like take a three-dimensional tensor and turn it into a two-dimensional tensor, or a three-dimensional tensor, and turn into a zero dimensional tensor.
(00:48:17) Think like a sum or a max are really common ones there. Then the fourth type is movement ops. Movement ops are different from the other types, because they don’t actually require computation. They require different ways to look at memory. That includes reshapes, permutes, expands, flips. Those are the main ones, probably.
Lex Fridman (00:48:35) With that, you have enough to make a MatMul?
George Hotz (00:48:38) And convolutions, and every convolution you can imagine, dilated convolutions, strided convolutions, transposed convolutions.
Lex Fridman (00:48:46) You’re right on GitHub about laziness, showing a MatMul, matrix multiplication. See how despite the style, it is fused into one kernel with the power of laziness. Can you elaborate on this power of laziness?
George Hotz (00:49:01) Sure. If you type in PyTorch, A times B plus C, what this is going to do is it’s going to first multiply A and B, and store that result into memory. Then it is going to add C by reading that result from memory, reading C from memory, and writing that out to memory.
(00:49:21) There is way more loads in stores to memory than you need there. If you don’t actually do A times B as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus C. It’s the same way Haskell works.
Lex Fridman (00:49:39) What’s the process of porting a model into Tinygrad?
George Hotz (00:49:44) Tinygrad’s front end looks very similar to PyTorch. I probably could make a perfect, or pretty close to perfect, interop layer if I really wanted to. I think that there’s some things that are nicer about Tinygrad’s syntax than PyTorch, but their front end looks very Torch-like. You can also load in ONNX models.
Lex Fridman (00:49:59) Okay.
George Hotz (00:50:00) We have more ONNX tests passing than Core ML.
Lex Fridman (00:50:04) Core ML. Okay.
George Hotz (00:50:06) We’ll pass ONNX run time soon.
Lex Fridman (00:50:07) Well, what about the developer experience with Tinygrad? What it feels like versus PyTorch?
George Hotz (00:50:16) By the way, I really like PyTorch. I think that it’s actually a very good piece of software. I think that they’ve made a few different trade-offs, and these different trade-offs are where Tinygrad takes a different path. One of the biggest differences is it’s really easy to see the kernels that are actually being sent to the GPU, right?
(00:50:35) If you run PyTorch on a GPU, you do some operation, and you don’t know what kernels ran, you don’t know how many kernels ran. You don’t know how many flops were used. You don’t know how much memory accesses were used. Tinygrad type debug equals two, and it will show you in this beautiful style, every kernel that’s run, how many flops, and how many bites.
Lex Fridman (00:50:58) Can you just linger on what problem Tinygrad solves?
George Hotz (00:51:04) Tinygrad solves the problem of porting new ML accelerators quickly. One of the reasons, tons of these companies now, I think Sequoia marked Graphcore to zero, Cerebras, TensTorrent, Groq. All of these ML accelerator companies, they built chips. The chips were good, the software was terrible.
(00:51:28) Part of the reason is because I think the same problem’s happening with Dojo. It’s really, really hard to write a PyTorch port, because you have to write 250 kernels, and you have to tune them all for performance.
Lex Fridman (00:51:40) What does Jim Keller think about Tinygrad? You guys hung out quite a bit. He was involved. He’s involved with TensTorrent.
George Hotz (00:51:48) Sure.
Lex Fridman (00:51:49) What’s his praise, and what’s his criticism of what you’re doing with your life?
George Hotz (00:51:54) Look, my prediction for TensTorrent is that they’re going to pivot to making risk five chips, CPUs.
Lex Fridman (00:52:03) CPUs.
George Hotz (00:52:04) Yeah.
Lex Fridman (00:52:05) Why?
George Hotz (00:52:08) Why? AI accelerators are a software problem, not really a hardware problem.
Lex Fridman (00:52:12) Oh, interesting. You think the diversity of AI accelerators in the hardware space is not going to be a thing that exists long term?
George Hotz (00:52:21) I think what’s going to happen is, okay. If you’re trying to make an AI accelerator, you better have the capability of writing a Torch-level performance stack on Nvidia GPUs. If you can’t write a Torch stack on Nvidia GPUs and I mean all the way, I mean down to the driver, there’s no way you’re going to be able to write it on your chip. Your chip’s worse than in Nvidia GPU. The first version of the chip you tape out, it’s definitely worse.
Lex Fridman (00:52:46) Oh, you’re saying writing that stack is really tough?
George Hotz (00:52:48) Yes, and not only that, actually the chip that you tape out, almost always, because you’re trying to get advantage over Nvidia, you’re specializing the hardware more. It’s always harder to write software for more specialized hardware. A GPU is pretty generic. If you can’t write an in Nvidia stack, there’s no way you can write a stack for your chip. My approach with Tinygrad is first write a performant NVIDIA stack. We’re targeting AMD.
Lex Fridman (00:53:13) You did say FU to Nvidia a little bit with Love.
George Hotz (00:53:16) With love. Yeah, with love. It’s like the Yankees. I’m a Mets fan.

NVIDIA vs AMD

Lex Fridman (00:53:20) Oh, you’re a Mets fan? A RISC fan and a Mets fan. What’s the hope that AMD has? You did a build with AMD recently that I saw. How does the 7,900 XTX compare to the RTX 4090 or 4080?
George Hotz (00:53:38) Oh, well, let’s start with the fact that the 7,900 XTX kernel drivers don’t work. If you run demo apps and loops, it panics the kernel.
Lex Fridman (00:53:46) Okay, so this is a software issue.
George Hotz (00:53:49) Lisa Sue responded to my email.
Lex Fridman (00:53:51) Oh.
George Hotz (00:53:51) I reached out. I was like, “This is, really?”
Lex Fridman (00:53:56) Yeah.
George Hotz (00:53:57) I understand if your seven by seven transposed Winograd comp is slower than NVIDIA’s, but literally when I run demo apps in a loop, the kernel panics?
Lex Fridman (00:54:08) Just adding that loop?
George Hotz (00:54:10) Yeah. I just literally took their demo apps and wrote, “While true; do the app; done,” in a bunch of screens. This is the most primitive fuzz testing.
Lex Fridman (00:54:20) Why do you think that is? They’re just not seeing a market in machine learning?
George Hotz (00:54:26) They’re changing. They’re trying to change. They’re trying to change. I had a pretty positive interaction with them this week. Last week, I went on YouTube. I was just like, “That’s it. I give up on AMD. Their driver doesn’t even… I’ll go with Intel GPUs. Intel GPUs have better drivers.”
Lex Fridman (00:54:45) You’re kind of spearheading the diversification of GPUs.
George Hotz (00:54:50) Yeah, and I’d like to extend that diversification to everything. I’d like to diversify, the more my central thesis about the world is there’s things that centralize power, and they’re bad. There’s things that decentralize power, and they’re good. Everything I can do to help decentralize power, I’d like to do.
Lex Fridman (00:55:12) You’re really worried about the centralization of Nvidia. That’s interesting. You don’t have a fundamental hope for the proliferation of ASICs except in the cloud?
George Hotz (00:55:23) I’d like to help them with software. No, actually, the only ASIC that is remotely successful is Google’s TPU. The only reason that’s successful is because Google wrote a machine learning framework. I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.
Lex Fridman (00:55:41) You think Meta with PyTorch builds a competitor?
George Hotz (00:55:45) I hope so.
Lex Fridman (00:55:46) Okay.
George Hotz (00:55:46) They have one. They have an internal one.
Lex Fridman (00:55:48) Internal, I mean public facing with a nice cloud interface and so on?
George Hotz (00:55:52) I don’t want a cloud.
Lex Fridman (00:55:53) You don’t like cloud?
George Hotz (00:55:55) I don’t like cloud.
Lex Fridman (00:55:55) What do you think is the fundamental limitation of cloud?
George Hotz (00:55:58) Fundamental limitation of cloud is who owns the off switch.
Lex Fridman (00:56:02) That’s the power to the people.
George Hotz (00:56:03) Yeah.
Lex Fridman (00:56:04) You don’t like the man to have all the power.
George Hotz (00:56:07) Exactly.

tinybox

Lex Fridman (00:56:08) All right. Right now, the only way to do that is with Nvidia GPUs if you want performance and stability. Interesting. It’s a costly investment emotionally to go with AMD’s. Well, let me on a tangent, ask you, you’ve built quite a few PCs. What’s your advice on how to build a good custom PC for, let’s say, for the different applications that you use for gaming, for machine learning?
George Hotz (00:56:35) Well, you shouldn’t build one. You should buy a box from the Tiny Corp.
Lex Fridman (00:56:39) I heard rumors, whispers about this box in the Tiny Corp. What’s this thing look like? What is it called?
George Hotz (00:56:48) It’s called the Tinybox.
Lex Fridman (00:56:48) Tinybox.
George Hotz (00:56:51) It’s $15,000, and it’s almost a paid flop of compute. It’s over a hundred gigabytes of GPU RAM. It’s over five terabytes per second of GPU memory bandwidth. I’m going to put four NVMes in RAID. You’re going to get like 20, 30 gigabytes per second of drive read bandwidth. I’m going to build the best deep learning box that I can plugs into one wall outlet.
Lex Fridman (00:57:19) Okay. Can you go through those specs again a little bit from memory?
George Hotz (00:57:23) Yeah. It’s almost a paid flop of compute.
Lex Fridman (00:57:25) AMD, Intel?
George Hotz (00:57:26) Today I’m leaning toward AMD, but we’re pretty agnostic to the type of compute. The main limiting spec is a 120 volt, 15 amp circuit.
Lex Fridman (00:57:40) Okay.
George Hotz (00:57:41) Well, I mean it. In order to, there’s a plug over there. You have to be able to plug it in. We’re also going to sell the Tiny Rack, which, what’s the most power you can get into your house without arousing suspicion? One of the answers is an electric car charger.
Lex Fridman (00:57:59) Wait, where does the Rack go?
George Hotz (00:58:01) Your garage.
Lex Fridman (00:58:03) Interesting. The car charger?
George Hotz (00:58:05) A wall outlet is about 1500 watts. A car charger is about 10,000 watts.
Lex Fridman (00:58:11) Okay. What is the most amount of power you can get your hands on without arousing suspicion?
George Hotz (00:58:16) That’s right.
Lex Fridman (00:58:16) George Hotz. Okay. The Tinybox, and you said NVMEs in RAID. I forget what you said about memory, all that kind of stuff. Okay, so what about with GPUs?
George Hotz (00:58:29) Again, probably-
Lex Fridman (00:58:30) Agnostic.
George Hotz (00:58:30) Probably 7,900 XTXes, but maybe 3090s, maybe A770s. Those are Intel’s.
Lex Fridman (00:58:36) You’re flexible, or still exploring?
George Hotz (00:58:39) I’m still exploring. I want to deliver a really good experience to people. What GPUs I end up going with, again, I’m leaning toward AMD. We’ll see. In my email, what I said to AMD is, “Just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one year old, stale issues. Open source means developing in public. If you guys can commit to that, I see a real future for AMD as a competitor to Nvidia.”
Lex Fridman (00:59:13) Well, I’d love to get a Tinybox to MIT. Whenever it’s ready-
George Hotz (00:59:17) Will do.
Lex Fridman (00:59:17) Let’s do it.
George Hotz (00:59:18) We’re taking pre-orders. I took this from Elon. I’m like, “$100, fully refundable pre-orders.”
Lex Fridman (00:59:23) Is it going to be like the cyber truck? It’s going to take a few years?
George Hotz (00:59:26) No, I’ll try to do it faster. It’s a lot simpler. It’s a lot simpler than a truck.
Lex Fridman (00:59:30) Well, there’s complexities, not to just the putting the thing together, but shipping it, all this kind of stuff.
George Hotz (00:59:36) The thing that I want to deliver to people out of the box is being able to run 65 billion parameter Lama in FP16 in real time, in a good 10 tokens per second, or five tokens per second or something.
Lex Fridman (00:59:46) Just, it works.
George Hotz (00:59:47) Yep, just works.
Lex Fridman (00:59:48) Lama’s running, or something like Lama.
George Hotz (00:59:53) Yeah, or I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.
Lex Fridman (01:00:00) Yeah, from a wall plug.
George Hotz (01:00:01) From a wall plug, yeah. Actually, for inference, it’s not like even more power would help you get more.
Lex Fridman (01:00:09) Even more power wouldn’t get you more.
George Hotz (01:00:11) Well, no, the biggest model released is 65 billion parameter Lama, as far as I know.
Lex Fridman (01:00:16) It sounds like Tinybox will naturally pivot towards company number three. You could just get the girlfriend or boyfriend.
George Hotz (01:00:26) That one’s harder, actually.
Lex Fridman (01:00:27) The boyfriend is harder?
George Hotz (01:00:28) The boyfriend’s harder, yeah.
Lex Fridman (01:00:29) I think that’s a very biased statement.
George Hotz (01:00:32) No.
Lex Fridman (01:00:32) I think a lot of people disagree. Why is it harder to replace a boyfriend than a girlfriend with the artificial LLM?
George Hotz (01:00:41) Women are attracted to status and power, and men are attracted to youth and beauty. No, this is what I mean.
Lex Fridman (01:00:49) Both could be mimic-able easy through the language model.
George Hotz (01:00:52) No. No, machines do not have any status or real power.
Lex Fridman (01:00:56) I don’t know. Well, first of all, you’re using language mostly to communicate youth and beauty and power and status.
George Hotz (01:01:07) Sure, but status fundamentally is a zero-sum game, whereas youth and beauty are not.
Lex Fridman (01:01:12) No, I think status is a narrative you can construct. I don’t think status is real.
George Hotz (01:01:18) I don’t know. I just think that that’s why it’s harder. Yeah, maybe it is my biases.
Lex Fridman (01:01:23) I think status is way easier to fake.
George Hotz (01:01:25) I also think that men are probably more desperate and more likely to buy my product. Maybe they’re a better target market.
Lex Fridman (01:01:31) Desperation is interesting. Easier to fool.
George Hotz (01:01:34) Yeah.
Lex Fridman (01:01:36) I could see that.
George Hotz (01:01:36) Yeah. Look, I know you can look at porn viewership numbers, right? A lot more men watch porn than women.
Lex Fridman (01:01:41) Yeah.
George Hotz (01:01:41) You can ask why that is.
Lex Fridman (01:01:43) Wow. There’s a lot of questions and answers you can get there. Anyway, with the Tinybox, how many GPUs in Tinybox?
George Hotz (01:01:53) Six.
Lex Fridman (01:01:58) Oh, man.
George Hotz (01:01:59) I’ll tell you why it’s six.
Lex Fridman (01:02:00) Yeah.
George Hotz (01:02:01) AMD Epic processors have 128 lanes of PCIE. I want to leave enough lanes for some drives, and I want to leave enough lanes for some networking.
Lex Fridman (01:02:15) How do you do cooling for something like this?
George Hotz (01:02:17) Ah, that’s one of the big challenges. Not only do I want the cooling to be good, I want it to be quiet.
Lex Fridman (01:02:22) Yeah.
George Hotz (01:02:23) I want the Tinybox to be able to sit comfortably in your room. Right.
Lex Fridman (01:02:26) This is really going towards the girlfriend thing. You want to run the LLM-
George Hotz (01:02:31) I’ll give a more, I can talk about how it relates to company number one.
Lex Fridman (01:02:36) Come AI.
George Hotz (01:02:36) Yeah.
Lex Fridman (01:02:37) Well, but yes, quiet. Oh, quiet because you maybe potentially want to run it in a car?
George Hotz (01:02:43) No, no. Quiet because you want to put this thing in your house. You want it to coexist with you. If it’s screaming at 60 dB, you don’t want that in your house. You’ll kick it out.
Lex Fridman (01:02:51) 60 dB, yeah.
George Hotz (01:02:51) Yeah. I want like 40, 45.
Lex Fridman (01:02:53) How do you make the cooling quiet? That’s an interesting problem in itself.
George Hotz (01:02:57) A key trick is to actually make it big. Ironically, it’s called the Tinybox, but if I can make it big, a lot of that noise is generated because of high pressure air. If you look at a 1U server, a 1U server has these super high pressure fans.
(01:03:09) They’re super deep and they’re like jet engines, versus if you have something that’s big, well, I can use a big, they call them big ass fans. Those ones that are huge on the ceiling? They’re completely silent.
Lex Fridman (01:03:21) Tinybox will be big.
George Hotz (01:03:26) I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that’s my constraint there.
Lex Fridman (01:03:32) Interesting. Well, the fan stuff, can it be assembled on location, or no?
George Hotz (01:03:37) No.
Lex Fridman (01:03:37) No, it has to be… Well, you’re…
George Hotz (01:03:41) Look, I want to give you a great out of the box experience. I want you to lift this thing out, I want it to be like the Mac, Tinybox.
Lex Fridman (01:03:48) The Apple experience.
George Hotz (01:03:49) Yeah.
Lex Fridman (01:03:50) I love it. Okay. Tinybox would run Tinygrad. What do you envision this whole thing to look like? We’re talking about Linux with a full…
Lex Fridman (01:04:03) Linux with a full software engineering environment and it’s just not PyTorch, but tinygrad.
George Hotz (01:04:10) Yeah, we did a poll. If people want Ubuntu or Arch, we’re going to stick with Ubuntu.
Lex Fridman (01:04:14) Interesting. What’s your favorite flavor of Linux?
George Hotz (01:04:17) Ubuntu.
Lex Fridman (01:04:18) Ubuntu. I like Ubuntu MATE, however you pronounce that MATE. You’ve gotten LLaMA into tinygrad, you’ve gotten stable diffusion into tinygrad. What was that like? What are these models, what’s interesting about porting them? What are the challenges? What’s naturally? What’s easy? All that kind of stuff.
George Hotz (01:04:41) There’s a really simple way to get these models into tinygrad and you can just export them as Onyx and then tinygrad can run Onyx. So the ports that I did of LLaMA Stable Diffusion and now Whisper are more academic to teach me about the models, but they are cleaner than the PyTorch versions. You can read the code. I think the code is easier to read, it’s less lines. There’s just a few things about the way tinygrad writes things. Here’s a complaint I have about PyTorch. nn.ReLU is a class so when you create an NN module, you’ll put your nn ReLUs as in a nit, and this makes no sense. ReLU is completely stateless. Why should that be a class?
Lex Fridman (01:05:23) But that’s more a software engineering thing, or do you think it has a cost on performance?
George Hotz (01:05:28) Oh no, it doesn’t have a cost on performance, but yeah, no. That’s what I mean about tinygrad’s front end being cleaner.
Lex Fridman (01:05:35) I see. What do you think about Mojo? I don’t know if you’ve been paying attention, the programming language that does some interesting ideas that intersect tinygrad.
George Hotz (01:05:46) I think that there’s a spectrum and on one side you have Mojo and on the other side you have ggml. Ggml is this like, we’re going to run LlaMA fast on Mac. Okay. We’re going to expand out to a little bit, but we’re going to basically depth first, right? Mojo is like we’re going to go breath first. We’re going to go so wide that we’re going to make all of Python Fast and tinygrad’s in the middle. Tinygrads, we are going to make neural networks fast,
Lex Fridman (01:06:12) But they try to really get it to be fast, compile down to the specifics hardware and make that compilation step as flexible and resilient as possible.
George Hotz (01:06:26) But they’ve turned completeness.
Lex Fridman (01:06:28) And that limits you? That’s what you’re saying it’s somewhere in the middle. So you’re actually going to be targeting some accelerators, some number, not one.
George Hotz (01:06:38) My goal is step one, build an equally performance stack to PyTorch on Nvidia and AMD, but with way less lines. And then step two is, okay, how do we make an accelerator? But you need step one. You have to first build the framework before you can build the accelerator.
Lex Fridman (01:06:56) Can you explain MLPerf? What’s your approach in general to benchmarking tinygrad performance?
George Hotz (01:07:03) I’m much more of a build it the right way and worry about performance later. There’s a bunch of things where I haven’t even really dove into performance. The only place where tinygrad is competitive performance wise right now is on Qualcomm GPUs. So tinygrads actually used an openpilot to run the model. So the driving model is tinygrad.
Lex Fridman (01:07:25) When did that happen? That transition?
George Hotz (01:07:28) About eight months ago now. And it’s two x faster than Qualcomm’s library.
Lex Fridman (01:07:33) What’s the hardware of that openpilot runs on the comma.ai?
George Hotz (01:07:38) It’s a Snapdragon 845.
Lex Fridman (01:07:40) Okay.
George Hotz (01:07:40) So this is using the GPU. So the GPU’s in Adreno GPU. There’s different things. There’s a really good Microsoft paper that talks about mobile GPUs and why they’re different from desktop GPUs. One of the big things is in a desktop GPU, you can use buffers. On a mobile GPU image textures are a lot faster
Lex Fridman (01:08:01) On a mobile GPU image textures. Okay. And so you want to be able to leverage that?
George Hotz (01:08:08) I want to be able to leverage it in a way that it’s completely generic. So there’s a lot of… Xiaomi has a pretty good open source library for mobile GPUs called MACE where they can generate where they have these kernels, but they’re all hand coded. So that’s great. If you’re doing three by three comps, that’s great if you’re doing dense mat malls, but the minute you go off the beaten path a tiny bit, well your performance is nothing.

Self-driving

Lex Fridman (01:08:30) Since you mentioned openpilot, I’d love to get an update in the company number one, comma.ai world. How are things going there in the development of semi autonomous driving?
George Hotz (01:08:46) Almost no one talks about FSD anymore and even less people talk about openpilot. We’ve thought the problem, we solved it years ago.
Lex Fridman (01:08:55) What’s the problem exactly? What does solving it mean?
George Hotz (01:09:00) Solving means how do you build a model that outputs a human policy for driving. How do you build a model that given reasonable set of sensors, outputs a human policy for driving? So you have companies like [inaudible 01:09:15], which are hand coding, these things that are quasi human policies. Then you have Tesla and maybe even to more of an extent, comma, asking, okay, how do we just learn the human policy and data? The big thing that we’re doing now, and we just put it out on Twitter. At the beginning of comma, we published a paper called Learning a Driving Simulator. And the way this thing worked was, it was an auto encoder and then an RNN in the middle. You take an auto encoder, you compress the picture, you use an RNN, predict the next date. It was a laughably bad simulator. This is 2015 error machine learning technology. Today we have VQVAE and transformers. We’re building drive GPT basically.
Lex Fridman (01:10:06) Drive GPT. Okay. And it’s trained on what? Is it trained in a self supervised way?
George Hotz (01:10:14) Yeah. It’s trained on all the driving data to predict the next frame.
Lex Fridman (01:10:17) So really trying to learn a human policy. What would a human do?
George Hotz (01:10:22) Actually our simulator’s conditioned on the pose. So it’s actually a simulator. You can put in a state action pair and get out the next state. And then once you have a simulator, you can do RRL in the simulator and RRL will get us that human policy.
Lex Fridman (01:10:36) So transfers?
George Hotz (01:10:38) Yeah. RRL with a reward function. Not asking is this close to the human policy, but asking would a human disengage if you did this behavior?
Lex Fridman (01:10:47) Okay, let me think about the distinction there. What a human disengage. That correlates, I guess with human policy, but it could be different. So it doesn’t just say, what would a human do? It says what would a good human driver do and such that the experience is comfortable but also not annoying in that the thing is very cautious. So it’s finding a nice balance. That’s interesting. That’s a nice-
George Hotz (01:11:17) It’s asking exactly the right question. What will make our customers happy? A system that you never want to disengage.
Lex Fridman (01:11:25) Because usually disengagement is this almost always a sign of I’m not happy with what the system is doing.
George Hotz (01:11:32) Usually. There’s some that are just, I felt like driving and those are always fine too, but they’re just going to look like noise in the data.
Lex Fridman (01:11:39) But even I felt like driving.
George Hotz (01:11:41) Maybe. Yeah.
Lex Fridman (01:11:43) That’s a signal. Why do you feel like driving. You need to recalibrate your relationship with the car. Okay, so that’s really interesting. How close are we to solving self driving?
George Hotz (01:11:59) It’s hard to say. We haven’t completely closed the loop yet. So we don’t have anything built that truly looks like that architecture yet. We have prototypes and there’s bugs. So we are a couple bug fixes away. Might take a year, might take 10.
Lex Fridman (01:12:15) What’s the nature of the bugs? Are these major philosophical bugs? Logical bugs? What kind of bugs are we talking about?
George Hotz (01:12:22) They’re just stupid bugs. And also we might just need more scale. We just massively expanded our compute cluster at comma. We now have about two people worth of compute. 40 petaflops.
Lex Fridman (01:12:36) Well, people are different.
George Hotz (01:12:39) 20 petaflops. That’s a person. It’s just a unit. Horses are different too, but we still call it a horsepower.
Lex Fridman (01:12:45) But there’s something different about mobility than there is about perception and action in a very complicated world. But yes.
George Hotz (01:12:54) Yeah. Of course not all flops are created equal. If you have randomly initialized weights, it’s not going to…
Lex Fridman (01:12:58) Not all flops are created equal.
George Hotz (01:13:01) For some flops are doing way more useful things than others.
Lex Fridman (01:13:03) Yep. Tell me about it. Okay, so more data. Scale means more scale in compute or scale in scale of data.
George Hotz (01:13:11) Both.
Lex Fridman (01:13:14) Diversity of data.
George Hotz (01:13:15) Diversity is very important in data. Yeah. I think we have 5,000 daily actives.
Lex Fridman (01:13:25) How would you evaluate? How FSD doing with self-driving.
George Hotz (01:13:30) Pretty well.
Lex Fridman (01:13:31) How’s that race going between Comma.ai and FSD?
George Hotz (01:13:34) Tesla has always wanted to two years ahead of us. They’ve always been one to two years ahead of us and they probably always will be because they’re not doing anything wrong.
Lex Fridman (01:13:41) What have you seen since the last time we talked that are interesting architectural decisions, training decisions the way they deploy stuff, the architectures they’re using in terms of the software, how the teams are run, all that kind of stuff, data collection, anything interesting?
George Hotz (01:13:54) I know they’re moving toward more of an end-to-end approach.
Lex Fridman (01:13:58) So creeping towards end-to- end as much as possible across the whole thing? The training, the data collection, and everything?
George Hotz (01:14:05) They also have a very fancy simulator. They’re probably saying all the same things we are. They’re probably saying we just need to optimize. What is the reward? Well, you get negative reward for disengagement. Everyone knows this. It’s just a question who can actually build and deploy the system?
Lex Fridman (01:14:18) Yeah. This requires good software engineering, I think. And the right kind of hardware.
George Hotz (01:14:25) Yeah. And the hardware to run it.
Lex Fridman (01:14:27) You still don’t believe in cloud in that regard?
George Hotz (01:14:30) I have a compute cluster in my office, 800 amps,
Lex Fridman (01:14:36) tinygrad.
George Hotz (01:14:36) It’s 40 kilowatts at idle our data center. That seem crazy. Have 40 kilowatts is burning just when the computers are idle. Sorry. Compute cluster.
Lex Fridman (01:14:48) Compute cluster. I got it.
George Hotz (01:14:49) It’s not a data center. Data centers are clouds. We don’t have clouds. Data centers have air conditioners. We have fans that makes it a compute cluster.
Lex Fridman (01:14:59) I’m guessing this is a kind of legal distinction that should [inaudible 01:15:03].
George Hotz (01:15:02) Sure. Yeah. We have a compute cluster.
Lex Fridman (01:15:05) You said that you don’t think LLMs have consciousness, or at least not more than a chicken. Do you think they can reason? Is there something interesting to you about the word reason about some of the capabilities that we think is kind of human to be able to integrate complicated information and through a chain of thought arrive at a conclusion that feels novel? A novel integration of disparate facts?
George Hotz (01:15:36) Yeah. I don’t think that they can reason better than a lot of people.
Lex Fridman (01:15:42) Yeah. Isn’t that amazing to you though? Isn’t that an incredible thing that a transformer can achieve?
George Hotz (01:15:48) I think that calculators can add better than a lot of people.
Lex Fridman (01:15:52) But language feels reasoning through the process of language, which looks a lot like thought.
George Hotz (01:16:00) Making brilliancy in chess, which feels a lot thought. Whatever new thing that AI can do, everybody thinks is brilliant. And then 20 years go by and they’re like, “Well, yeah, but chess, that’s like mechanical.” Adding, that’s mechanical.
Lex Fridman (01:16:13) So you think language is not that special. It’s like chess.
George Hotz (01:16:15) It’s like chess.
Lex Fridman (01:16:17) Because it’s very human. Listen, there is something different between chess and language. Chess is a game that a subset of population plays. Language is something we use nonstop for all of our human interaction and human interaction is fundamental to society. So holy shit, this language thing is not so difficult to create in the machine.
George Hotz (01:16:46) The problem is if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won’t. What’s changed? Today, we think that these machines that have language are intelligent, but I think in 20 years we’re going to be like, yeah, but can it reproduce?
Lex Fridman (01:17:08) So reproduction. Yeah, we may redefine what it means to be… What is it? A high performance living organism on earth.
George Hotz (01:17:17) Human are always going to define a niche for themselves. Well, we’re better than the machines because we can… When they tried creative for a bit, but no one believes that one anymore.
Lex Fridman (01:17:27) But niche, is that delusional or is there some accuracy to that? Because maybe with chess you start to realize that we have ill-conceived notions of what makes humans special, the apex organism on earth.
George Hotz (01:17:46) Yeah. And I think maybe we’re going to go through that same thing with language and that same thing with creativity.
Lex Fridman (01:17:53) But language carries these notions of truth and so on. And so we might be, wait, maybe truth is not carried by language. Maybe there’s a deeper thing.
George Hotz (01:18:03) The niche is getting smaller.
Lex Fridman (01:18:05) Oh boy.
George Hotz (01:18:07) But no, no, no. You don’t understand. Humans are created by God and machines are created by humans. That’ll be the last niche we have.
Lex Fridman (01:18:16) So what do you think about just the rapid development of LLMs? If we could just stick on that. It’s still incredibly impressive like with Chat GPT, just even Chat GPT, what are your thoughts about reinforcement learning with human feedback on these large language models?
George Hotz (01:18:30) I’d like to go back to when calculators first came out or computers and I wasn’t around. I’m 33 years old and to see how that affected society,
Lex Fridman (01:18:47) Maybe you’re right. So I want to put on the big picture hat here.
George Hotz (01:18:53) Oh my God. The refrigerator. Wow.
Lex Fridman (01:18:56) Refrigerator, electricity, all that kind of stuff. But no, with the internet, large language models seeming human-like basically passing a touring test, it seems it might have really at scale rapid transformative effects on society. But you’re saying other technologies have as well. So maybe calculator’s not the best example of that because that just seems like… Maybe calculator-
George Hotz (01:19:24) But the poor milk man, the day he learned about refrigerators, he’s like, I’m done. You’re telling me you can just keep the milk in your house. You don’t even mean to deliver it every day. I’m done.
Lex Fridman (01:19:34) Well, yeah, you have to actually look at the practical impacts of certain technologies that they’ve had. Yeah, probably electricity is a big one and also how rapidly spread. The internet is a big one.
George Hotz (01:19:46) I do think it’s different this time though.
Lex Fridman (01:19:48) Yeah, it just feels like-
George Hotz (01:19:49) The niche is getting smaller.
Lex Fridman (01:19:51) The niche is humans.
George Hotz (01:19:52) Yes.
Lex Fridman (01:19:53) That makes humans special.
George Hotz (01:19:55) Yes.
Lex Fridman (01:19:57) It feels like it’s getting smaller rapidly though, doesn’t it? Or is that just a feeling we dramatize everything.
George Hotz (01:20:02) I think we dramatize everything. I think that you ask the milk man when he saw refrigerators. And they’re going to have one of these in every home.
Lex Fridman (01:20:12) Yeah. But boys are impressive. So much more impressive than seeing a chess world champion AI system.
George Hotz (01:20:23) I disagree, actually. I disagree. I think things like MuZero and AlphaGo are so much more impressive because these things are playing beyond the highest human level. The language models are writing middle school level essays and people are like, wow, it’s a great essay. It’s a great five paragraph essay about the causes of the civil war.
Lex Fridman (01:20:47) Okay, forget the Civil War. Just generating code codex. So you’re saying it’s mediocre code.
George Hotz (01:20:53) Terrible.
Lex Fridman (01:20:54) But I don’t think it’s terrible. I think it’s just mediocre code. Often close to correct for mediocre purposes.
George Hotz (01:21:03) The scariest code. I spent 5% of time typing and 95% of time debugging. The last thing I want is close to correct code. I want a machine that can help me with the debugging, not with the typing.
Lex Fridman (01:21:14) Well, it’s like level two driving similar kind of thing. Yeah. You still should be a good programmer in order to modify. I wouldn’t even say debugging. It’s just modifying the code, reading it.
George Hotz (01:21:26) Actually, don’t think it’s level two driving. I think driving is not tool complete and programming is. Meaning you don’t use the best possible tools to drive. Cars have basically the same interface for the last 50 years. Computers have a radically different interface.
Lex Fridman (01:21:43) Okay. Can you describe the concept of tool complete?
George Hotz (01:21:47) Yeah. So think about the difference between a car from 1980 and a car from today. No difference really. It’s got a bunch of pedals. It’s got a steering wheel. Great. Maybe now it has a few ADAS features, but it’s pretty much the same car. You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript and you put them in an Apple IIe prompt and you tell them about the line numbers in basic, but how do I insert something between line 17 and 18? Oh wow.
Lex Fridman (01:22:19) So in tool, you’re putting in the programming languages. So it’s just the entirety stack of the tooling.
George Hotz (01:22:24) Exactly.
Lex Fridman (01:22:25) So it’s not just the IDEs or something like this. It’s everything.
George Hotz (01:22:28) Yes. It’s IDEs, the language, it’s the run time, it’s everything. And programming is tool complete. So almost if Codex or copilot are helping you, that actually probably means that your framework or library is bad and there’s too much boilerplate in it.
Lex Fridman (01:22:47) Yeah, but don’t you think so much programming has boilerplate?
George Hotz (01:22:50) Tinygrad is now 2,700 lines and it can run LLaMA and stable diffusion and all of this stuff is in 2,700 lines. Boilerplate and abstraction in directions and all these things are just bad code.

Programming

Lex Fridman (01:23:08) Well, let’s talk about good code and bad code. I would say, for generic scripts that I write just offhand, 80% of it is written by GPT, just like quick offhand stuff. So not libraries, not performing code, not stuff for robotics and so on. Just quick stuff because so much of programming is doing some boilerplate, but to do so efficiently and quickly because you can’t really automate it fully with generic method, a generic kind of IDE type of recommendation or something like this. You do need to have some of the complexity of language models.
George Hotz (01:23:53) Yeah, I guess if I was really writing, maybe today, if I wrote a lot of data parsing stuff… I don’t play CTFs anymore, but if I still play CTFs, a lot of is just you have to write a parser for this data format or admin of code. I wonder when the models are going to start to help with that code and they may. And the models also may help you with speed and the models are very fast, but where the models won’t, my programming speed is not at all limited by my typing speed. And in very few cases, it is yes. If I’m writing some script to just parse some weird data format, sure, my programming speed is limited by my typing speed.
Lex Fridman (01:24:35) What about looking stuff up? Because that’s essentially a more efficient lookup.
George Hotz (01:24:41) When I was at Twitter, I tried to use chat GPT to ask some questions. Was the API for this? And it would just hallucinate, it would just give me completely made up API functions that sounded real.
Lex Fridman (01:24:54) Well. Do you think that’s just a temporary stage?
George Hotz (01:24:57) No.
Lex Fridman (01:24:58) You don’t think it’ll get better and better and better in this kind of stuff because it only hallucinates stuff in the edge cases.
George Hotz (01:25:04) Yes.
Lex Fridman (01:25:04) If you right in generic code, it’s actually pretty good.
George Hotz (01:25:06) Yes. If you are writing an absolute basic react app with a button, it’s not going to hallucinate. No, there’s kind of ways to fix the hallucination problem. I think Facebook has an interesting paper. It’s called Atlas and it’s actually weird the way that we do language models right now where all of the information is in the weights and the human brains don’t really like this. There’s like a hippocampus and a memory system. So why don’t LLMs have a memory system? And there’s people working on them. I think future LLMs are going to be smaller, but are going to run looping on themselves and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources, explicitly.
Lex Fridman (01:25:47) Which is really helpful to integrate the human into the loop of the thing because you can go check the sources and you can investigate. So whenever the thing is hallucinating, you can have the human supervision. So that’s pushing it towards level two driving.
George Hotz (01:26:01) That’s going to kill Google.
Lex Fridman (01:26:03) Wait, which part?
George Hotz (01:26:04) When someone makes an LLM that’s capable of citing its sources, it will kill Google.
Lex Fridman (01:26:08) LLM that’s citing its sources because that’s basically a search engine.
George Hotz (01:26:13) That’s what people want in the search engine.
Lex Fridman (01:26:14) But also Google might be the people that build it.
George Hotz (01:26:16) Maybe.
Lex Fridman (01:26:17) And put ads on it.
George Hotz (01:26:19) I’d count them out.
Lex Fridman (01:26:20) Why is that? Why do you think? Who wins this race? Who are the competitors?
George Hotz (01:26:26) All right.
Lex Fridman (01:26:27) We got Tiny Corp. You’re a legitimate competitor in that.
George Hotz (01:26:33) I’m not trying to compete on that.
Lex Fridman (01:26:35) You’re not.
George Hotz (01:26:36) No. Not as [inaudible 01:26:37].
Lex Fridman (01:26:36) Can accidentally stumble into that competition.
(01:26:40) You don’t think you might build a search engine or replace Google search.
George Hotz (01:26:43) When I started Comma, I said over and over again, I’m going to win self-driving cars. I still believe that. I have never said I’m going to win search with the Tiny Corp and I’m never going to say that because I won’t.
Lex Fridman (01:26:55) Then night is still young. You don’t know how hard is it to win search in this new route? One of the things that Chat GPT shows that there could be a few interesting tricks that really have that create a really compelling product.
George Hotz (01:27:09) Some startups going to figure it out. I think if you ask me, Google’s still the number one webpage. I think by the end of the decade Google won’t be the number one my bed anymore.
Lex Fridman (01:27:17) So you don’t think Google because of how big the corporation is?
George Hotz (01:27:21) Look, I would put a lot more money on Mark Zuckerberg.
Lex Fridman (01:27:25) Why is that?
George Hotz (01:27:27) Because Mark Zuckerberg’s alive. This is old Paul Graham essay. Startups are either alive or dead. Google’s dead. Facebook is alive.
Lex Fridman (01:27:38) Facebook is alive. Meta is alive.
George Hotz (01:27:39) Actually, Meta.
Lex Fridman (01:27:40) Meta.
George Hotz (01:27:40) You see what I mean? That’s just Mark Zuckerberg. This is Mark Zuckerberg reading that Paul Graham asking and being like, I’m going to show everyone how alive we are. I’m going to change the name.
Lex Fridman (01:27:49) So you don’t think there’s this gutsy pivoting engine that Google doesn’t have that… The engine in a startup has constantly being alive, I guess.
George Hotz (01:28:03) When I listen to Sam Altman podcast, he talked about the button. Everyone who talks about AI talks about the button, the button to turn it off, right? Do we have a button to turn off Google? Is anybody in the world capable of shutting Google down?
Lex Fridman (01:28:17) What does that mean exactly? The company or the search engine.
George Hotz (01:28:19) We shut the search engine down. Could we shut the company down either?
Lex Fridman (01:28:24) Can you elaborate on the value of that question?
George Hotz (01:28:26) Does Sundar Pichai have the authority to turn off google.com tomorrow?
Lex Fridman (01:28:31) Who has the authority? That’s a good question.
George Hotz (01:28:33) Just anyone.
Lex Fridman (01:28:36) Just anyone. Yeah, I’m sure.
George Hotz (01:28:37) Are you sure? No, they have the technical power, but do they have the authority? Let’s say Sundar Pichai made this his sole mission. He came into Google tomorrow and said, “I’m going to shut google.com down.” I don’t think you keep this position too long.”
Lex Fridman (01:28:52) And what is the mechanism by which he wouldn’t keep his position?
George Hotz (01:28:55) Well, the boards and shares and corporate undermining and our revenue is zero now.
Lex Fridman (01:29:02) Okay. What’s the case you’re making here? So the capitalist machine prevents you from having the button.
George Hotz (01:29:09) Yeah. And it’ll have. This is true for the AI too. There’s no turning the AIs off. There’s no button. You can’t press it. Now, does Mark Zuckerberg have that button for facebook.com?
Lex Fridman (01:29:21) Yes. Probably more.
George Hotz (01:29:22) I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.
Lex Fridman (01:29:29) I guess you could say Elon has similar stuff.
George Hotz (01:29:31) Oh, Elon has the button.
Lex Fridman (01:29:32) Yeah.
George Hotz (01:29:35) Can Elon fire the missiles? Can he fire the missiles?
Lex Fridman (01:29:39) I think some questions are better left unasked.
George Hotz (01:29:42) Right? A rocket and an ICBM or you’re a rocket that can land anywhere. Isn’t that an ICBM? Well, yeah. Don’t ask too many questions.
Lex Fridman (01:29:51) My God. But the positive side of the button is that you can innovate aggressively is what you’re saying? Which is what’s required with turning LLM into a search engine.
George Hotz (01:30:04) I would bet on a startup.
Lex Fridman (01:30:05) Because it’s so easy, right?
George Hotz (01:30:06) I’d bet on something that looks like mid journey, but for search.
Lex Fridman (01:30:11) Just is able to say source a loop on itself. It’s just feels like one model can take off and nice wrapper and some of it scale… It’s hard to create a product that just works really nicely, stably.
George Hotz (01:30:23) The other thing that’s going to be cool is there is some aspect of a winner take all effect. Once someone starts deploying a product that gets a lot of usage, and you see this with Open AI, they’re going to get the data set to train future versions of the model. I was asked at Google image search when I worked there almost 15 years ago now. How does Google know which image is an apple? And I said, the metadata. And they’re like, yeah, that works about half the time. How does Google know? You’ll see they’re all apples on the front page when you search Apple. And I don’t know. I didn’t come up with the answer. The guy’s like, “Well, 12 people click on when they search Apple.” Oh my God, yeah.

AI safety

Lex Fridman (01:31:00) Yeah. That data is really, really powerful. It’s the human supervision. What do you think are the chances? What do you think in general that LLaMA was open sourced? I just did a conversation with Mark Zuckerberg and he’s all in on open source.
George Hotz (01:31:17) Who would’ve thought that Mark Zuckerberg would be the good guy? No. I mean, it
Lex Fridman (01:31:23) Would’ve thought anything in this world. It’s hard to know. But open source to you ultimately is a good thing here.
George Hotz (01:31:33) Undoubtedly. What’s ironic about all these AI safety people is they’re going to build the exact thing they fear. We need to have one model that we control and align. This is the only way you end up paper clipped. There’s no way you end up paper clipped if everybody has an AI.
Lex Fridman (01:31:54) So opensourcing is the way to fight the paperclip maximizer.
George Hotz (01:31:56) Absolutely. It’s the only way. You think you’re going to control it. You’re not going to control it.
Lex Fridman (01:32:02) So the criticism you have for the AI safety folks is that there is belief and a desire for control. And that belief and desire for centralized control of dangerous AI systems is not good.
George Hotz (01:32:16) Sam Altman won’t tell you that GPT 4 has 220 billion parameters and is a 16 way mixture model with eight sets of weights.
Lex Fridman (01:32:25) Who did you have to murder to get that information? All right. But, yes.
George Hotz (01:32:30) Look. Everyone at Open AI knows what I just said was true. Right?
Lex Fridman (01:32:33) Yeah.
George Hotz (01:32:34) Now, ask the question. It upsets me when I… Like GPT 2, when Open AI came out with GPT two and raised a whole fake AI safety thing about that. Now the model is laughable. They used AI safety to hype up their company and it’s disgusting.
Lex Fridman (01:32:52) Or the flip side of that is they used a relatively weak model in retrospect to explore how do we do AI safety correctly? How do we release things? How do we go through the process?
George Hotz (01:33:06) Sure. That’s a charitable interpretation.
Lex Fridman (01:33:10) I don’t know how much hype there is in AI safety, honestly.
George Hotz (01:33:12) There’s so much hype, at least on Twitter. I don’t know. Maybe Twitter’s not real life.
Lex Fridman (01:33:15) Twitter’s not real life. Come on. In terms of hype. Think Open AI has been finding an interesting balance between transparency and putting a value on AI safety. You don’t think just go all out open source. So do a LLaMA.
George Hotz (01:33:33) Absolutely. Yeah.
Lex Fridman (01:33:36) This is a tough question, which is open source, both the base, the foundation model and the fine tune one. So the model that can be ultra racist and dangerous and tell you how to build a nuclear weapon.
George Hotz (01:33:51) Oh my God. Have you met humans? Right. Half of these AI alive-
Lex Fridman (01:33:55) I haven’t met most humans. This allows you to meet every human.
George Hotz (01:34:00) I know. But half of these AI alignment problems are just human alignment problems. And that’s what also so scary about the language they use. It’s not the machines you want to align, it’s me.
Lex Fridman (01:34:11) But here’s the thing, it makes it very accessible to ask very questions where the answers have dangerous consequences if you were to act on them.
George Hotz (01:34:25) Yeah, welcome to the world.
Lex Fridman (01:34:28) Well, no, for me, there’s a lot of friction. If I want to find out how to blow up something.
George Hotz (01:34:36) No, there’s not a lot of friction. That’s so easy.
Lex Fridman (01:34:39) No. What do I search? Do I use Bing? Which search engine engine do I use?
George Hotz (01:34:45) No. There’s lots of stuff. [inaudible 01:34:47].
Lex Fridman (01:34:46) No, it feels like I have to keep [inaudible 01:34:47].
George Hotz (01:34:47) First off, anyone who’s stupid enough to search for, how to blow up a building in my neighborhood is not smart enough to build a bomb. Right?
Lex Fridman (01:34:54) Are you sure about that?
George Hotz (01:34:55) Yes.
Lex Fridman (01:34:58) I feel like a language model makes it more accessible for that person who’s not smart enough to do-
George Hotz (01:35:05) They’re not going to build a bomb. Trust me. The people who are incapable of figuring out how to ask that question a bit more academically and get a real answer from it are not capable of procuring the materials which are somewhat controlled to build a bomb.
Lex Fridman (01:35:19) No, I think LLM makes it more accessible to people with money without the technical know-how. Right? Do you really need to know how to build a bomb? To build a bomb? You can hire people you can find-
George Hotz (01:35:30) Oh, you can hire people to build a… You know what, I was asking this question on my stream. Can Jeff Bezos hire a hit man? Probably not.
Lex Fridman (01:35:37) But a language model can probably help you out.
George Hotz (01:35:41) Yeah. And you’ll still go to jail. It’s not the language model is God. It’s you literally just hired someone on Fiverr.
Lex Fridman (01:35:49) But okay. GPT 4 in terms of finding hitman is like asking Fiverr how to find a hitman. I understand. But don’t you think-
George Hotz (01:35:56) Asking Wikihow.
Lex Fridman (01:35:58) Wikihow. But don’t you think GPT 5 will be better? Because don’t you think that information is out there on the internet?
George Hotz (01:36:03) Yeah.
Lex Fridman (01:36:03) … because don’t you think that information is out there on the Internet?
George Hotz (01:36:03) I mean, yeah. And I think that if someone is actually serious enough to hire a hitman or build a bomb, they’d also be serious enough to find the information.
Lex Fridman (01:36:10) I don’t think so. I think it makes it more accessible. If you have enough money to buy hitman, I think it just decreases the friction of how hard is it to find that kind of hitman. I honestly think there’s a jump in ease and scale of how much harm you can do. And I don’t mean harm with language, I mean harm with actual violence.
George Hotz (01:36:32) What you’re basically saying is like, “Okay, what’s going to happen is these people who are not intelligent are going to use machines to augment their intelligence, and now intelligent people and machines…” Intelligence is scary. Intelligent agents are scary. When I’m in the woods, the scariest animal to me is a human. Now, look, there’s nice California humans. I see you’re wearing street clothes and Nikes, all right, fine. But you look like you’ve been a human who’s been in the woods for a while, I’m more scared of you than a bear.
Lex Fridman (01:37:01) That’s what they say about the Amazon, when you go to the Amazon, it’s the human tribes.
George Hotz (01:37:05) Oh, yeah. So, intelligence is scary. So, to ask this question in a generic way, you’re like, “What if we took everybody who maybe has ill intention but is not so intelligent, and gave them intelligence?” Right? So, we should have intelligence control, of course. We should only give intelligence to good people. And that is the absolutely horrifying idea.
Lex Fridman (01:37:28) So to you, the best defense is to give more intelligence to the good guys and intelligence… give intelligence to everybody.
George Hotz (01:37:35) Give intelligence to everybody. You know what, and it’s not even like guns. People say this about guns. People say this all about guns, “What’s the best defense against the bad guy with a gun? A good guy with a gun.” I kind of subscribe to that. But I really subscribe to that with intelligence.
Lex Fridman (01:37:45) In a fundamental way I agree with you, but there just feels like so much uncertainty, and so much can happen rapidly that you can lose a lot of control, and you can do a lot of damage.
George Hotz (01:37:54) Oh no, we can lose control? Yes, thank God.
Lex Fridman (01:37:58) Yeah.
George Hotz (01:37:59) I hope they lose control. I want them to lose control more than anything else.
Lex Fridman (01:38:05) I think when you lose control you can do a lot of damage, but you could do more damage when you centralize and hold onto control, is the point you’re…
George Hotz (01:38:12) Centralized and held control is tyranny. I don’t like anarchy either, but I’ll always take anarchy over tyranny. Anarchy you have a chance.
Lex Fridman (01:38:21) This human civilization we got going on is quite interesting. I mean, I agree with you. So to you, open source is the way forward here. So you admire what Facebook is doing here, what Meta is doing with the release of the-
George Hotz (01:38:34) Yeah, a lot.
Lex Fridman (01:38:34) Yeah, I don’t know.
George Hotz (01:38:36) I lost $80,000 last year investing in Meta, and when they released Llama I’m like, “Yeah, whatever, man. That was worth it.”
Lex Fridman (01:38:41) It was worth it. Do you think Google and Open AI with Microsoft will match what Meta is doing, or no?
George Hotz (01:38:50) If I were a researcher, why would you want to work at Open AI? You’re on the bad team. I mean it. You’re on the bad team, who can’t even say that GPT4 has 220 billion parameters.
Lex Fridman (01:39:01) So closed source to you is the bad team?
George Hotz (01:39:03) Not only closed source. I’m not saying you need to make your model weights open. I’m not saying that. I totally understand, “We’re keeping our model weights closed, because that’s our product.” That’s fine. I’m saying, “Because of AI safety reasons we can’t tell you the number of billions of parameters in the model,” that’s just the bad guys.
Lex Fridman (01:39:23) Just because you’re mocking AI safety doesn’t mean it’s not real.
George Hotz (01:39:26) Oh, of course.
Lex Fridman (01:39:27) Is it possible that these things can really do a lot of damage that we don’t know…
George Hotz (01:39:31) Oh my God, yes. Intelligence is so dangerous, be it human intelligence or machine intelligence. Intelligence is dangerous.
Lex Fridman (01:39:38) But machine intelligence is so much easier to deploy at scale, rapidly. Okay, if you have human-like bots on Twitter, and you have 1000 of them create a whole narrative, you can manipulate millions of people.
George Hotz (01:39:55) You mean like the intelligence agencies in America are doing right now?
Lex Fridman (01:39:59) Yeah, but they’re not doing it that well. It feels like you can do a lot-
George Hotz (01:40:03) They’re doing it pretty well. I think they’re doing a pretty good job.
Lex Fridman (01:40:07) I suspect they’re not nearly as good as a bunch of GPT fueled bots could be.
George Hotz (01:40:12) Well, I mean, of course they’re looking into the latest technologies for control of people. Of course.
Lex Fridman (01:40:16) But I think there’s a George Hotz type character that can do a better job than the entirety of them.
George Hotz (01:40:21) No way.
Lex Fridman (01:40:21) You don’t think so?
George Hotz (01:40:22) No way. No. And I’ll tell you why the George Hotz character can’t. And I thought about this a lot with hacking. I can find exploits in web browsers. I probably still can. I mean, I was better at it when I was 24.
Lex Fridman (01:40:29) Yeah.
George Hotz (01:40:29) But the thing that I lack is the ability to slowly and steadily deploy them over five years. And this is what intelligence agencies are very good at. Intelligence agencies don’t have the most sophisticated technology, they just have-
Lex Fridman (01:40:43) Endurance?
George Hotz (01:40:44) Endurance.
Lex Fridman (01:40:46) And yeah, the financial backing, and the infrastructure for the endurance.
George Hotz (01:40:51) So the more we can decentralize power…
Lex Fridman (01:40:54) Yeah.
George Hotz (01:40:55) You can make an argument, by the way, that nobody should have these things. And I would defend that argument. You’re saying that, “Look, LLMs, and AI, and machine intelligence can cause a lot of harm, so nobody should have it.” And I will respect someone philosophically with that position, just like I will respect someone philosophically with the position that nobody should have guns. But I will not respect philosophically with, “Only the trusted authorities should have access to this.”
Lex Fridman (01:41:21) Yeah.
George Hotz (01:41:22) Who are the trusted authorities? You know what, I’m not worried about alignment between AI company and their machines. I’m worried about alignment between me and AI company.
Lex Fridman (01:41:33) What do you think Eliezer Yudkowsky would say to you? Because he’s really against open source.
George Hotz (01:41:39) I know. And I thought about this. I’ve thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists.
Lex Fridman (01:41:55) Interesting.
George Hotz (01:41:58) I think that Eliezer Yudkowsky is scared of these things. And I am scared of these things too. Everyone should be scared of these things, these things are scary. But now you ask about the two possible futures, one where a small trusted centralized group of people has them, and the other where everyone has them, and I am much less scared of the second future than the first.
Lex Fridman (01:42:23) Well, there’s a small trusted group of people that have control of our nuclear weapons.
George Hotz (01:42:28) There’s a difference. Again, a nuclear weapon cannot be deployed tactically, And a nuclear weapon is not a defense against a nuclear weapon, except maybe in some philosophical mind game kind of way.
Lex Fridman (01:42:41) But AI’s different how exactly?
George Hotz (01:42:44) Okay. Let’s say the intelligence agency deploys a million bots on Twitter, or 1000 bots on Twitter to try and convince me of a point. Imagine I had a powerful AI running on my computer saying, “Okay, nice psyop, nice psyop, nice psyop.” Okay, ” Here’s a psyop, I filtered it out for you.”
Lex Fridman (01:43:04) Yeah. I mean, so you have fundamentally hope for that, for the defense of psyop.
George Hotz (01:43:11) I don’t even mean these things in truly horrible ways. I mean these things in straight up, like ad blocker. [inaudible 01:43:16] ad blocker, I don’t want ads.
Lex Fridman (01:43:18) Yeah.
George Hotz (01:43:18) But they’re always finding… Imagine I had an AI that could just block all the ads for me.
Lex Fridman (01:43:24) So you believe in the power of the people to always create an ad blocker? Yeah, I kind of share that belief. That’s one of the deepest optimism as I have, is just there’s a lot of good guys. So you shouldn’t handpick them, just throw out powerful technology out there, and the good guys will outnumber and out power the bad guys.
George Hotz (01:43:49) Yeah. I’m not even going to say there’s a lot of good guys. I’m saying that good outnumbers bad. Good outnumbers bad.
Lex Fridman (01:43:54) In skill and performance?
George Hotz (01:43:56) Yeah, definitely in scale and performance. Probably just a number too. Probably just in general. If you believe philosophically in democracy, you obviously believe that, that good outnumbers bad. If you give it to a small number of people, there’s a chance you gave it to good people, but there’s also a chance you gave it to bad people. If you give it to everybody, well it’s good outnumbers bad, then you definitely gave it to more good people than bad.
Lex Fridman (01:44:25) That’s really interesting. So that’s on the safety grounds, but then also of course there’s other motivations, like you don’t want to give away your secret sauce.
George Hotz (01:44:32) Well I mean, look, I respect capitalism. I think that it would be polite for you to make model architectures open source, and fundamental breakthroughs open source. I don’t think you have to make weights open source.
Lex Fridman (01:44:43) You know it’s interesting, is that there’s so many possible trajectories in human history where you could have the next Google be open source. So for example, I don’t know if the connection is accurate, but Wikipedia made a lot of interesting decisions, not to put ads. Wikipedia is basically open source, you can think of it that way.
George Hotz (01:45:04) Yeah.
Lex Fridman (01:45:05) And that’s one of the main websites on the Internet.
George Hotz (01:45:08) Yeah.
Lex Fridman (01:45:09) And it didn’t have to be that way. It could’ve been Google could’ve created Wikipedia, put ads on it. You could probably run amazing ads now on Wikipedia. You wouldn’t have to keep asking for money. But it’s interesting, right? So open source Llama, derivatives of open-source Llama might win the Internet.
George Hotz (01:45:28) I sure hope so. I hope to see another era… You know, the kids today don’t know how good the Internet used to be. And I don’t think this is just, “All right, come on, everyone’s nostalgic for their past.” But I actually think the Internet before small groups of weapon eyes to corporate and government interests took it over was a beautiful place.
Lex Fridman (01:45:50) You know, those small number of companies have created some sexy products. But you’re saying overall, in the long arc of history, the centralization of power they have suffocated the human spirit at scale.
George Hotz (01:46:04) Here’s a question to ask about those beautiful sexy products. Imagine 2000 Google to 2010 Google. A lot changed. We got Maps, we got Gmail.
Lex Fridman (01:46:14) We lost a lot of products too, I think.
George Hotz (01:46:16) Yeah, I mean somewhere probably… We got Chrome, right?
Lex Fridman (01:46:18) Yeah, Chrome. That’s right.
George Hotz (01:46:19) And now let’s go from 2010… We got Android. Now let’s go from 2010 to 2020. What does Google have? Well, a search engine, Maps, Male, Android and Chrome. Oh, I see.
Lex Fridman (01:46:30) Yeah.
George Hotz (01:46:31) The Internet was this… You know, I was Time’s Person of the Year in 2006? Yeah.
Lex Fridman (01:46:38) I love this.
George Hotz (01:46:39) Yeah, it’s you, was Time’s Person of the Year in 2006. So quickly did people forget. And I think some of it’s social media, I think some of it… Look, I hope that… It’s possible that some very sinister things happened. I don’t know, I think it might just be the effects of social media. But something happened in the last 20 years.
Lex Fridman (01:47:05) Oh, okay, so you’re just being an old man who is worried about the… I think it’s the cycle thing, there’s ups and downs, and I think people rediscover the power of decentralized.
George Hotz (01:47:15) Yeah.
Lex Fridman (01:47:15) I mean, that’s kind of what the whole crypto currency’s trying. I think crypto is just carrying the flame of that spirit, of stuff should be decentralized.
George Hotz (01:47:25) It’s just such a shame that they all got rich. You know?
Lex Fridman (01:47:28) Yeah.
George Hotz (01:47:28) If you took all the money out of crypto, it would’ve been a beautiful place.
Lex Fridman (01:47:32) Yeah.
George Hotz (01:47:32) But no, I mean, these people, they sucked all the value out of it and took it.
Lex Fridman (01:47:38) Yeah. Money kind of corrupts the mind somehow. It becomes this drug, and you forget what-
George Hotz (01:47:42) Money corrupted all of crypto. You had coins worth billions of dollars that had zero use.
Lex Fridman (01:47:49) You still have hope for crypto?
George Hotz (01:47:51) Sure. I have hope for the ideas. I really do. Yeah. I want the US dollar to collapse. I do.
Lex Fridman (01:48:03) George Hotz. Well, let me… sort of on the AI safety. Do you think there’s some interesting questions there though, to solve for the open source community in this case? So alignment for example, or the control problem. If you really have super powerful… you said it’s scary.
George Hotz (01:48:21) Oh, yeah.
Lex Fridman (01:48:21) What do we do with it? So not control, not centralized control, but if you were then… You’re going to see some guy or gal release a super powerful language model, open source, and here you are, George Hotz, thinking, “Holy shit, okay, what ideas do I have to combat this thing?” So, what ideas would you have?
George Hotz (01:48:44) I am so much not worried about the machine independently doing harm. That’s what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator.
Lex Fridman (01:48:57) So you don’t think it will find autonomy?
George Hotz (01:48:59) No. This is sci-fi B movie garbage
Lex Fridman (01:49:03) Okay. What if the thing writes code, it basically writes viruses?
George Hotz (01:49:08) If the thing writes viruses, it’s because the human told it to write viruses.
Lex Fridman (01:49:14) Yeah, but there’s some things you can’t put back in the box. That’s kind of the whole point, is it kind of spreads. Give it access to the Internet, it spreads, it installs itself, modifies your shit-
George Hotz (01:49:24) B, B, B + five. Not real.
Lex Fridman (01:49:27) Listen, I’m trying to get better at my plot writing.
George Hotz (01:49:30) The thing that worries me, I mean, we have a real danger to discuss, and that is bad humans using the thing to do whatever bad unaligned AI thing you want.
Lex Fridman (01:49:39) But this goes to your previous concern that, who gets to define who’s a good human and who is a bad human?
George Hotz (01:49:45) Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that’s who gets power. It’s always the bad humans who get power.
Lex Fridman (01:49:55) Oh, okay. And power turns even slightly good humans to bad.
George Hotz (01:50:01) Sure.
Lex Fridman (01:50:02) That’s the intuition you have. I don’t know.
George Hotz (01:50:06) I don’t think everyone. I don’t think everyone. I just think… Here’s the saying that I put in one of my blog posts. It’s, when I was in the hacking world, I found 95% of people to be good and 5% of people to be bad. Just who I personally judged as good people and bad people. They believed about good things for the world. They wanted flourishing, and they wanted growth, and they wanted things I consider good. I came into the business world with Comma, and I found the exact opposite. I found 5% of people good and 95% of people bad. I found a world that promotes psychopathy.
Lex Fridman (01:50:38) I wonder what that means. I wonder if that’s anecdotal, or if there’s truth to that, there’s something about capitalism at the core that promotes, the people that run capitalism that promotes psychopathy.
George Hotz (01:50:55) That saying may of course be my own biases. That may be my own biases, that these people are a lot more aligned with me than these other people.
Lex Fridman (01:51:03) Yeah.
George Hotz (01:51:04) So, I can certainly recognize that. But in general, this is the common sense maxim, which is the people who end up getting power are never the ones you want with it.
Lex Fridman (01:51:15) But do you have a concern of super intelligent AGI, open sourced, and then what do you do with that? I’m not saying control it, it’s open source. What do we do with it as a human species?
George Hotz (01:51:27) That’s not up to me. I’m not a central planner.
Lex Fridman (01:51:31) No, not a central planner, but you’ll probably Tweet, “There’s a few days left to live for the human species.”
George Hotz (01:51:35) I have my ideas of what to do with it, and everyone else has their ideas of what to do with it, and may the best ideas win.
Lex Fridman (01:51:40) But at this point, based on… Because it’s not regulation. It can be decentralized regulation, where people agree that this is just… We create tools that make it more difficult for you to… Maybe make it more difficult for code to spread, antivirus software, this kind of thing, but this-
George Hotz (01:52:01) Oh, you’re saying that you should build AI firewalls? That sounds good. You should definitely be running an AI firewall.
Lex Fridman (01:52:05) Yeah, right. Exactly.
George Hotz (01:52:05) You should be running an AI firewall to your mind.
Lex Fridman (01:52:08) Right.
George Hotz (01:52:09) You’re constantly under-
Lex Fridman (01:52:10) That’s such an interesting idea…
George Hotz (01:52:11) Infowars, man.
Lex Fridman (01:52:13) Well, I don’t know if you’re being sarcastic or not, but-
George Hotz (01:52:14) No, I’m dead serious.
Lex Fridman (01:52:15) … but I think there’s power to that. It’s like, “How do I protect my mind from influence of human-like or superhuman intelligent bots?”
George Hotz (01:52:26) I am not being… I would pay so much money for that product. I would pay so much money for that product. You know how much money I’d pay just for a spam filter that works?
Lex Fridman (01:52:35) Well, on Twitter sometimes I would like to have a protection mechanism for my mind from the outrage mobs.
George Hotz (01:52:46) Yeah.
Lex Fridman (01:52:46) Because they feel like bot-like behavior.
George Hotz (01:52:48) Oh, yeah.
Lex Fridman (01:52:48) There’s a large number of people that will just grab a viral narrative and attack anyone else that believes otherwise.
George Hotz (01:52:55) Whenever someone’s telling me some story from the news, I’m always like, “I don’t want to hear it. CIA op, bro. It’s a CIA op, bro.” It doesn’t matter if that’s true or not, it’s just trying to influence your mind. You’re repeating an ad to me. The viral mobs, yeah, they’re…
Lex Fridman (01:53:09) To me, a defense against those mobs is just getting multiple perspectives always from sources that make you feel kind of like you’re getting smarter. And actually, it just basically feels good. A good documentary, just something feels good about it. It’s well done, it’s like, “Oh, okay, I never thought of it this way.” It just feels good. Sometimes the outrage mobs, even if they have a good point behind it, when they’re mocking, and derisive, and just aggressive, “You’re with us or against us,” this fucking-
George Hotz (01:53:42) This is why I delete my Tweets.
Lex Fridman (01:53:44) Yeah, why’d you do that? I miss your Tweets.
George Hotz (01:53:48) You know what it is? The algorithm promotes toxicity.
Lex Fridman (01:53:52) Yeah.
George Hotz (01:53:54) And I think Elon has a much better chance of fixing it than the previous regime.
Lex Fridman (01:54:01) Yeah.
George Hotz (01:54:02) But to solve this problem, to build a social network that is actually not toxic, without moderation.
Lex Fridman (01:54:13) Not the stick, but carrots, where people look for goodness. Catalyze the process of connecting cool people being cool to each other.
George Hotz (01:54:24) Yeah.
Lex Fridman (01:54:25) Without ever censoring.
George Hotz (01:54:26) Without ever censoring. Scott Alexander has a blog post I like, where he talks about moderation is not censorship. All moderation you want to put on Twitter, you could totally make this moderation just a… You don’t have to block it for everybody. You can just have a filter button that people can turn off. It’s like SafeSearch for Twitter. Someone could just turn that off. But then you would take this idea to an extreme. Well, the network should just show you… This is a couch surfing CEO thing. If it shows you… Right now, these algorithms are designed to maximize engagement. Well, it turns out outrage maximizes engagement. Quirk of the human mind. Just, “If I fall for it, everyone falls for it.” So yeah, you’ve got to figure out how to maximize for something other than engagement.
Lex Fridman (01:55:12) And I actually believe that you can make money with that too. I don’t think engagement is the only way to make money.
George Hotz (01:55:18) I actually think it’s incredible that we’re starting to see… I think, again, Elon’s doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they’re no longer the product, they’re the customer. And then they can start building something that’s good for the customer, and not good for the other customer, which is the ad agencies.
Lex Fridman (01:55:34) It hasn’t picked up steam.
George Hotz (01:55:38) I pay for Twitter, doesn’t even get me anything. It’s my donation to this new business model hopefully working out.
Lex Fridman (01:55:43) Sure. But for this business model to work, most people should be signed up to Twitter. And so, there was something perhaps not compelling or something like this to people.
George Hotz (01:55:54) No, I don’t think you need most people at all. I think that, why do I need most people? Don’t make an 8000 person company, make a 50 person company.
Lex Fridman (01:56:02) Ah.
George Hotz (01:56:02) Right.

Working at Twitter

Lex Fridman (01:56:03) Well, so speaking of which, he worked at Twitter for a bit.
George Hotz (01:56:08) I did.
Lex Fridman (01:56:09) As an intern.
George Hotz (01:56:10) Mm-hmm.
Lex Fridman (01:56:11) The world’s greatest intern.
George Hotz (01:56:14) There’s been better.
Lex Fridman (01:56:15) There’s been better. Tell me about your time at Twitter. How did it come about, and what did you learn from the experience?
George Hotz (01:56:22) So, I deleted my first Twitter in 2010. I had over 100,000 followers back when that actually meant something. I just saw… My coworker summarized it well. He’s like, “Whenever I see someone’s Twitter page, I either think the same of them or less of them. I never think more of them.”
Lex Fridman (01:56:46) Yeah.
George Hotz (01:56:49) I don’t know, I don’t want to mention any names, but some people who maybe you would read their books, and you would respect them, you see them on Twitter and you’re like, “Okay, dude…”
Lex Fridman (01:56:58) Yeah. But there’s some people with the same. You know who I respect a lot, are people that just post really good technical stuff.
George Hotz (01:57:06) Yeah.
Lex Fridman (01:57:08) And I guess, I don’t know, I think I respect them more for it. Because you realize, “Oh, this wasn’t… There’s so much depth to this person, to their technical understanding of so many different topics.”
George Hotz (01:57:21) Okay.
Lex Fridman (01:57:22) So I try to follow people, I try to consume stuff that’s technical machine learning content.
George Hotz (01:57:27) There’s probably a few of those people. And the problem is inherently what the algorithm rewards. And people think about these algorithms, people think that they are terrible, awful things. And I love that Elon open sourced it. Because what it does is actually pretty obvious. It just predicts what you are likely to re-Tweet and like, and linger on. That’s what all these algorithms do. It’s what Tik-Tok does, it’s what all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that’s a quirk of the human condition.
Lex Fridman (01:58:04) I mean, and there’s different flavors of outrage. It could be mockery, you could be outraged… The topic of outrage could be different. It could be an idea, it could be a person, it could be… And maybe there’s a better word than outrage. It could be drama, and this kind of stuff.
George Hotz (01:58:19) Sure, drama. Yeah.
Lex Fridman (01:58:20) But it doesn’t feel like when you consume it it’s a constructive thing for the individuals that consume it in the long term.
George Hotz (01:58:26) Yeah. So my time there, I absolutely couldn’t believe, I got a crazy amount of hate on Twitter for working at Twitter. It seemed like people associated with this, maybe you are exposed to some of this.
Lex Fridman (01:58:41) So connection to Elon, or is it working at Twitter?
George Hotz (01:58:44) Twitter and Elon, the whole… There’s just-
Lex Fridman (01:58:47) Because Elon’s gotten a bit spicy during that time. A bit political, a bit-
George Hotz (01:58:52) Yeah. Yeah. I remember one of my Tweets, it was, “Never go full Republican,” and Elon liked it. You know?
Lex Fridman (01:59:00) Oh boy. Yeah, I mean, there’s a roller coaster of that. But the being political on Twitter, boy.
George Hotz (01:59:10) Yeah. Yeah.
Lex Fridman (01:59:11) And also just attacking anybody on Twitter, it comes back at you, harder. Of his political ad attacks.
George Hotz (01:59:20) Sure. Sure, absolutely.
Lex Fridman (01:59:22) And then letting sort of the platform to people back on even adds more fund to the beautiful chaos.
George Hotz (01:59:34) I was hoping… And I remember when Elon talked about buying Twitter, six months earlier, he was talking about a principled commitment to free speech. And I’m a big believer and fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn’t quite what happened. Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Instead of all the Twitterphile, shadow… And really, the oligarchy just decides, what? Cloth masks are ineffective against COVID. That’s a true statement. Every doctor in 2019 knew it and now I’m banned on Twitter for saying it? Interesting. Oligarchy. So now you have a monarchy, and he bends things he doesn’t like. So you know, it’s different power, and maybe I align more with him than with the oligarchy.
Lex Fridman (02:00:25) But it’s not free speech absolutism.
George Hotz (02:00:25) It’s not free speech, no.
Lex Fridman (02:00:28) But I feel like being a free speech absolutist on a social network requires you to also have tools for the individuals to control what they consume easier. Not sensor, but just control like, “Oh, I’d like to see more cats and less politics.”
George Hotz (02:00:48) And this isn’t even remotely controversial. This is just saying you want to give paying customers for a product what they want.
Lex Fridman (02:00:54) Yeah. And not through the process of censorship, but through the process of-
George Hotz (02:00:57) Well, it’s individualized. It’s individualized, transparent censorship, which is honestly what I want. What is an ad blocker? It’s individualized transparent censorship, right?
Lex Fridman (02:01:05) Yeah, but censorship is a strong word, that people are very sensitive to.
George Hotz (02:01:10) I know. But you know, I just use words to describe what they functionally are. And what is an ad blocker? It’s just censorship. But I love what you’re censoring.
Lex Fridman (02:01:16) When I look at you right now, I’m looking at you, I’m censoring everything else out when my mind is focused on you. You can use the word censorship that way. But usually, people get very sensitive about the censorship thing. I think when anyone is allowed to say anything, you should probably have tools that maximize the quality of the experience for individuals. For me, what I really value, “Boy, it would be amazing to somehow figure out how to do that,” I love disagreement, and debate, and people who disagree with each other, disagree with me, especially in the space of ideas, but the high quality ones. So not derision.
George Hotz (02:01:56) Maslow’s hierarchy of argument. I think there’s a real word for it.
Lex Fridman (02:02:00) Probably.
George Hotz (02:02:00) Yeah.
Lex Fridman (02:02:00) There’s just the way of talking that’s snarky, and so somehow gets people on Twitter, and they get excited and so on.
George Hotz (02:02:08) You have ad hominem refuting the central point. I’ve seen this as an actual pyramid sometimes.
Lex Fridman (02:02:12) Yeah. And all the wrong stuff is attractive to people.
George Hotz (02:02:16) I mean, we can just train a classifier to absolutely say what level of Maslow’s hierarchy of argument are you at. And if it’s ad hominem, like, “Okay, cool. I turned on the no ad hominem filter.”
Lex Fridman (02:02:27) I wonder if there’s a social network that will allow you to have that kind of filter?
George Hotz (02:02:31) Yeah. So here’s the problem with that. It’s not going to win in a free market.
Lex Fridman (02:02:38) Yeah.
George Hotz (02:02:38) What wins in a free market is… All television today is reality television, because it’s engaging. Engaging is what wins in a free market. So it becomes hard to keep these other more nuanced values.
Lex Fridman (02:02:53) Well, okay, so that’s the experience of being on Twitter. But then you got a chance to also, together with the other engineers and with Elon, sort of look, brainstorm when you step into a code base that’s been around for a long time, there’s other social networks, Facebook, this is old code bases. And you step in and see, “Okay, how do we make, with a fresh mind, progress in this code base?” What did you learn about software engineering, about programming from just experiencing that?
George Hotz (02:03:22) So, my technical recommendation to Elon, and I said this on the Twitter spaces afterward, I said this many times during my brief internship, was that you need re-factors before features. This code base was… And look, I’ve worked at Google, I’ve worked at Facebook. Facebook has the best code, then Google, then Twitter. And you know what, you can know this, because look at the machine learning framework. Facebook released PyTorch, Google released TensorFlow, and Twitter released… Okay, so you know, it…
Lex Fridman (02:03:57) It’s a proxy. But yeah, the Google Corp. is quite interesting. There’s a lot of really good software engineers there, but the code base is very large.
George Hotz (02:04:04) The code base was good in 2005. It looks like 2005 era [inaudible 02:04:09].
Lex Fridman (02:04:08) But there’s so many products, so many teams, it’s very difficult to… I feel like Twitter does less, obviously, much less than Google in terms of the set of features. So I can imagine the number of software engineers that could re-create Twitter is much smaller than to re-create Google.
George Hotz (02:04:30) Yeah. I still believe… and the amount of hate I got for saying this, that 50 people could build and maintain Twitter pretty comfortably.
Lex Fridman (02:04:44) What’s the nature of the hate? That you don’t know what you’re talking about?
George Hotz (02:04:44) You know what it is? And this is my summary of the hate I get on Hacker News. When I say I’m going to do something, they have to believe that it’s impossible. Because of doing things was possible, they’d have to do some soul-searching and ask the question, why didn’t they do anything? And I do think that’s where the hate comes from.
Lex Fridman (02:05:06) Yeah, there’s a core truth to that, yeah. So when you say, “I’m going to solve self driving,” people go like, “What are your credentials? What the hell are you talking about? This is an extremely difficult problem. Of course you’re a noob that doesn’t understand the problem deeply.” I mean, that was the same nature of hate that probably Elon got when he first talked about autonomous driving. But you know, there’s pros and cons to that. Because there is experts in this world.
George Hotz (02:05:33) No, but the mockers aren’t experts.
Lex Fridman (02:05:35) Yeah.
George Hotz (02:05:35) The people who are mocking are not experts With carefully reasoned arguments about why you need 8000 people to run a bird app. They’re, “But the people are going to lose their jobs!”
Lex Fridman (02:05:46) Well that, but also just the software engineers that probably criticize, “No, it’s a lot more complicated than you realize.” But maybe it doesn’t need to be so complicated.
George Hotz (02:05:53) You know, some people in the world like to create complexity. Some people in the world thrive under complexity. Like lawyers. Lawyers want the world to be more complex, because you need more lawyers, you need more legal hours. I think that’s another… If there’s two great evils in the world, its centralization and complexity.
Lex Fridman (02:06:09) Yeah. And one of the sort of hidden side effects of software engineering is finding pleasure in complexity. I mean, I remember just taking all the software engineering courses, and just doing programming, and just coming up in this object oriented programming kind of idea. Not often do people tell you, “Do the simplest possible thing.” A professor, a teacher is not going to get in front and like, “This is the simplest way to do it.” They’ll say like, “There’s the right way,” and the right way at least for a long time, especially I came up with Java, is there’s so much boilerplate, so many classes, so many designs and architectures and so on, like planning for features far into the future, and planning poorly, and all this kind of stuff.
(02:07:08) And then there’s this code base that follows you along and puts pressure on you, and nobody knows what different parts do, which slows everything down. There’s a kind of bureaucracy that’s instilled in the code as a result of that. But then you feel like, “Oh, well I follow good software engineering practices.” It’s an interesting trade-off, because then you look at the ghettoness of Pearl in the old… how quickly you could just write a couple lines and just get stuff done. That trade-off is interesting. Or Bash, or whatever, these kind of ghetto things you could do on Linux.
George Hotz (02:07:39) One of my favorite things to look at today is, how much do you trust your tests? We’ve put a ton of effort in Comma, and I’ve put a ton of effort in tinygrad, into making sure if you change the code and the tests pass, that you didn’t break the code.
Lex Fridman (02:07:52) Yeah.
George Hotz (02:07:52) Now, this obviously is not always true. But the closer that is to true, the more you trust your tests, the more you’re like, “Oh, I got a pull request, and the tests past, I feel okay to merge that,” the faster you can make progress.
Lex Fridman (02:08:03) So you’re always…
George Hotz (02:08:03) Tests pass, I feel okay to merge that, the faster you can make progress.
Lex Fridman (02:08:03) So you’re always programming your tests in mind, developing tests with that in mind, that if it passes, it should be good.
George Hotz (02:08:08) And Twitter had a…
Lex Fridman (02:08:10) Not that.
George Hotz (02:08:10) It was impossible to make progress in the code base.
Lex Fridman (02:08:15) What other stuff can you say about the code base that made it difficult? What are some interesting sort of quirks broadly speaking from that compared to just your experience with comma and everywhere else?
George Hotz (02:08:29) I spoke to a bunch of individual contributors at Twitter. And I just [inaudible 02:08:36]. I’m like, “Okay, so what’s wrong with this place? Why does this code look like this?” And they explained to me what Twitter’s promotion system was. The way that you got promoted to Twitter was you wrote a library that a lot of people used, right? So some guy wrote an Nginx replacement for Twitter. Why does Twitter need an Nginx replacement? What was wrong with Nginx? Well, you see, you’re not going to get promoted if you use Nginx. But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you’re going to get promoted.
Lex Fridman (02:09:08) So interesting because from an individual perspective, how do you create the kind of incentives that will lead to a great code base? Okay, what’s the answer to that?
George Hotz (02:09:20) So what I do at comma and at Tiny Corp is you have to explain it to me. You have to explain to me what this code does. And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way. Obviously, we can have a conversation about this. It’s not dictatorial, but if you’re like, “Wow. Wait, that actually is way simpler.” The simplicity is important.
Lex Fridman (02:09:47) But that requires people that overlook the code at the highest levels to be like, okay?
George Hotz (02:09:54) It requires technical leadership you trust.
Lex Fridman (02:09:55) Yeah, technical leadership. So managers or whatever should have to have technical savvy, deep technical savvy.
George Hotz (02:10:03) Managers should be better programmers than the people who they manage.
Lex Fridman (02:10:05) Yeah. And that’s not always trivial to create, especially large companies, managers get soft.
George Hotz (02:10:13) And this is just, I’ve instilled this culture at comma and comma has better programmers than me who work there. But again, I’m like the old guy from Good Will Hunting. It’s like, “Look man, I might not be as good as you, but I can see the difference between me and you.” And this is what you need, this you need at the top. Or you don’t necessarily need the manager to be the absolute best. I shouldn’t say that, but they need to be able to recognize skill.
Lex Fridman (02:10:36) Yeah. And have good intuition, intuition that’s laden with wisdom from all the battles of trying to reduce complexity in code bases.
George Hotz (02:10:45) I took a political approach at comma too, that I think is pretty interesting. I think Elon takes the same political approach. Google had no politics and what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics and they’re all mine and no dissidents is tolerated.
Lex Fridman (02:11:02) And so it’s a dictatorship.
George Hotz (02:11:03) Yep. It’s an absolute dictatorship. Right. Elon does the same thing. Now, the thing about my dictatorship is here are my values.
Lex Fridman (02:11:11) Yeah. It’s just transparent.
George Hotz (02:11:12) It’s transparent. It’s a transparent dictatorship and you can choose to opt in or you get free exit. That’s the beauty of companies. If you don’t like the dictatorship, you quit.
Lex Fridman (02:11:22) So you mentioned rewrite before or refactor before features.
George Hotz (02:11:27) Mm-hmm.
Lex Fridman (02:11:28) If you were to refactor the Twitter code base, what would that look like? And maybe also comment on how difficult is it to refactor.
George Hotz (02:11:35) The main thing I would do is first of all, identify the pieces and then put tests in between the pieces. So there’s all these different Twitter as a microservice architecture, all these different microservices. And the thing that I was working on there… Look, like, “George didn’t know any JavaScript. He asked how to fix search,” blah, blah, blah, blah, blah. Look man, the thing is, I’m upset that the way that this whole thing was portrayed because it wasn’t taken by people, honestly. It was taken by people who started out with a bad faith assumption.
Lex Fridman (02:12:12) And you as a program were just being transparent out there, actually having fun, and this is what programming should be about.
George Hotz (02:12:18) But I love that Elon gave me this opportunity. Really, it does. And the day I quit, he came on my Twitter spaces afterward and we had a conversation. I respect that so much.
Lex Fridman (02:12:29) Yeah. And it’s also inspiring to just engineers and programmers and it’s cool. It should be fun. The people that are hating on it’s like, oh man.
George Hotz (02:12:38) It was fun. It was fun. It was stressful, but I felt like I was at a cool point in history. And I hope I was useful and I probably kind of wasn’t, but maybe [inaudible 02:12:47].
Lex Fridman (02:12:47) Well, you also were one of the people that kind of made a strong case to refactor and that’s a really interesting thing to raise. The timing of that is really interesting. If you look at just the development of autopilot, going from Mobileye… If you look at the history of semi autonomous driving in Tesla, is more and more you could say refactoring or starting from scratch, redeveloping from scratch.
George Hotz (02:13:17) It’s refactoring all the way down.
Lex Fridman (02:13:19) And the question is, can you do that sooner? Can you maintain product profitability and what’s the right time to do it? How do you do it? And one day, it’s like you don’t want to pull off the band aids. It’s like everything works. It’s just little fixed gear and there, but maybe starting from scratch.
George Hotz (02:13:41) This is the main philosophy of tinygrad. You have never refactored enough. Your code can get smaller, your code can get simpler, your ideas can be more elegant.
Lex Fridman (02:13:49) But say you are running Twitter development teams, engineering teams, would you go as far as different programming language, just go that far?
George Hotz (02:14:03) I mean, the first thing that I would do is build tests. The first thing I would do is get a CI to where people can trust to make changes. Before I touched any code, I would actually say, “No one touches any code. The first thing we do is we test this code base.” This is classic. This is how you approach a legacy code base. This is like how to approach a legacy code base book will tell you.
Lex Fridman (02:14:27) And then you hope that there’s modules that can live on for a while and then you add new ones maybe in a different language or design it.
George Hotz (02:14:37) Before we add new ones, we replace the old ones.
Lex Fridman (02:14:39) Yeah. Meaning like, replace old ones with something simpler.
George Hotz (02:14:42) We look at this thing that’s a hundred thousand lines and we’re like, “Well, okay, maybe this did even make sense in 2010, but now we can replace this with an open source thing.” Right? And we look at this here, here’s another 50,000 lines. Well, actually, we can replace this with 300 lines a go. And you know what? I trust that the go actually replaces this thing because all the tests still pass. So step one is testing. And then step two is the programming languages in the afterthought, right? You let a whole lot of people compete and be like, “Okay, who wants to rewrite a module, whatever language you want to write it in?” Just the tests have to pass. And if you figure out how to make the test pass, but break the site, we got to go back to step one. Step one is get tests that you trust in order to make changes in the code base.
Lex Fridman (02:15:23) I wonder how hard it is too, because I’m with you on testing, on everything, from tests to asserts to everything. But code is just covered in this because it should be very easy to make rapid changes and know that it’s not going to break everything. And that’s the way to do it. But I wonder how difficult is it to integrate tests into a code base that doesn’t have many of them?
George Hotz (02:15:50) So I’ll tell you what my plan was at Twitter. It’s actually similar to something we use at comma. So at comma, we have this thing called process replay, and we have a bunch of routes that’ll be run through. So comma’s a microservice architecture too. We have microservices in the driving. We have one for the cameras, one for the sensor, one for the planner, one for the model. And we have an API which the microservices talk to each other with. We use this custom thing called serial, which uses ZMQ. Twitter uses Thrift, and then it uses this thing called Finagle, which is a Scala RPC backend. But this doesn’t even really matter.
(02:16:25) The Thrift and Finagle layer was a great place I thought to write tests, to start building something that looks like process replay. So Twitter had some stuff that looked kind of like this, but it wasn’t offline. It was only online. So you could ship a modified version of it, and then you could redirect some of the traffic to your modified version and dif those too, but it was all online. There was no CI in the traditional sense. I mean there was some, but it was not full coverage.
Lex Fridman (02:16:54) So you can’t run all of Twitter offline to test something.
George Hotz (02:16:57) Well, then this was another problem. You can’t run all of Twitter.
Lex Fridman (02:17:00) Period. Any one person can’t.
George Hotz (02:17:03) Twitter runs in three data centers and that’s it.
Lex Fridman (02:17:05) Yeah.
George Hotz (02:17:05) There’s no other place you can run Twitter, which is like, “George, you don’t understand this is modern software development.” No, this is bullshit. Why can’t it run on my laptop? “What do you do? Twitter can run it.” Yeah. Okay. Well, I’m not saying you’re going to download the whole database to your laptop, but I’m saying all the middleware and the front end should run on my laptop, right?
Lex Fridman (02:17:24) That sounds really compelling. But can that be achieved by a code base that grows over the years? I mean, the three data centers didn’t have to be right? Because there’s totally different designs.
George Hotz (02:17:37) The problem is more like why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?
Lex Fridman (02:17:47) One of the ways to explain is that the incentive for software developers to move up in the companies to add code, to add especially large-
George Hotz (02:17:55) And you know what? The incentive for politicians to move up in the political structure is to add laws, same problem.
Lex Fridman (02:18:01) Yeah. Yeah. If the flip side is to simplify, simplify, simplify.
George Hotz (02:18:08) You know what? This is something that I do differently from Elon with comma about self-driving cars. I hear the new version’s going to come out and the new version is not going to be better, but at first and it’s going to require a ton of refactors. And I say, “Okay, take as long as you need.” If you convince me this architecture’s better, okay, we have to move to it. Even if it’s not going to make the product better tomorrow, the top priority is getting the architecture right.
Lex Fridman (02:18:34) So what do you think about a thing where the product is online? So I guess, if you ran engineering on Twitter, would you just do a refactor? How long would it take? What would that mean for the running of the actual service?
George Hotz (02:18:55) I’m not the right person to run Twitter. I’m just not. And that’s the problem. I don’t really know. A common thing that I thought a lot while I was there was whenever I thought something that was different to what Elon thought. I’d have to run something in the back of my head reminding myself that Elon is the richest man in the world and in general, his ideas are better than mine. Now, there’s a few things I think I do understand and know more about, but in general, I’m not qualified to run Twitter. No, I shouldn’t say qualified, but I don’t think I’d be that good at it. I don’t think I’d be good at it. I don’t think I’d really be good at running an engineering organization at scale.
(02:19:35) I think, I could lead a very good refactor of Twitter and it would take six months to a year. And the results to show at the end of it would be feature development. In general, it takes 10 x less time, 10 x less man-hours. That’s what I think I could actually do. Do I think that it’s the right decision for the business above my pay grade?
Lex Fridman (02:20:03) But a lot of these kinds of decisions are above everybody’s pay grade.
George Hotz (02:20:06) I don’t want to be a manager. I don’t want to do that. If you really forced me to, yeah, it would maybe make me upset if I had to make those decisions. I don’t want to.
Lex Fridman (02:20:19) Yeah. But a refactor is so compelling. If this is to become something much bigger than what Twitter was, it feels like a refactor has to be coming at some point.
George Hotz (02:20:32) “George, you’re a junior software engineer. Every junior software engineer wants to come in and refactor all code.” Okay. That’s like your opinion, man.
Lex Fridman (02:20:42) Yeah, sometimes they’re right.
George Hotz (02:20:46) Well, whether they’re right or not, it’s definitely not for that reason. It’s definitely not a question of engineering prowess. It is a question of maybe what the priorities are for the company. And I did get more intelligent feedback from people I think in good faith saying that, like actually from Elon. And from Elon sort of people were like, well, I stop the world refactor might be great for engineering, but we have a business to run. And hey, above my pay grade.
Lex Fridman (02:21:13) What’d you think about Elon as an engineering leader having to experience him in the most chaotic of spaces, I would say?
George Hotz (02:21:25) My respect for him is unchanged. And I did have to think a lot more deeply about some of the decisions he’s forced to make.
Lex Fridman (02:21:33) About the tensions, the trade-offs within those decisions?
George Hotz (02:21:39) About a whole matrix coming at him. I think that’s Andrew Tate’s word for it. Sorry to borrow it.
Lex Fridman (02:21:46) Also, bigger than engineering, just everything.
George Hotz (02:21:49) Yeah. Like the war on the woke.
Lex Fridman (02:21:53) Yeah.
George Hotz (02:21:54) It’s just man, he doesn’t have to do this. He doesn’t have to. He could go pirogue and go chill at the four seasons of Maui. But see, one person I respect and one person I don’t.
Lex Fridman (02:22:11) So his heart is in the right place fighting in this case for this ideal of the freedom of expression.
George Hotz (02:22:17) Well, I wouldn’t define the ideal so simply. I think you can define the ideal no more than just saying Elon’s idea of a good world, freedom of expression is.
Lex Fridman (02:22:28) But it’s still the downsides of that is the monarchy.
George Hotz (02:22:33) Yeah. I mean monarchy has problems, right? But I mean, would I trade right now the current oligarchy which runs America for the monarchy? Yeah, I would. Sure. For the Elon monarchy, yeah. You know why? Because power would cost 1 cent a kilowatt-hour, 10th of a cent a kilowatt-hour.
Lex Fridman (02:22:53) What do you mean?
George Hotz (02:22:54) Right now, I pay about 20 cents a kilowatt-hour for electricity in San Diego. That’s like the same price you paid in 1980. What the hell?
Lex Fridman (02:23:02) So you would see a lot of innovation with Elon.
George Hotz (02:23:05) Yeah. Maybe I’d have some hyperloops.
Lex Fridman (02:23:07) Yeah.
George Hotz (02:23:08) Right? And I’m willing to make that trade off. And this is why people think that dictators take power through some untoward mechanism. Sometimes they do, but usually it’s because the people want them. And the downsides of a dictatorship, I feel like we’ve gotten to a point now with the oligarchy wear. Yeah, I would prefer the dictator.
Lex Fridman (02:23:30) What’d you think about scholars, the programming language?
George Hotz (02:23:35) I liked it more than I thought. I did the tutorials. I was very new to it. It would take me six months to be able to write good scholar.
Lex Fridman (02:23:41) I mean, what did you learn about learning a new programming language from that?
George Hotz (02:23:45) I love doing new programming tutorials and doing them. I did all this for Rust. It keeps some of it’s upsetting JVM Roots, but it is a much nicer… In fact, I almost don’t know why Kotlin took off and not Scala. I think Scala has some beauty that Kotlin lacked, whereas Kotlin felt a lot more… I mean, I don’t know if it actually was a response to Swift, but that’s kind of what it felt like. Kotlin looks more like Swift and Scala looks more like a functional programming language, more like an OCaml or Haskell.
Lex Fridman (02:24:18) Let’s actually just explore. We touched it a little bit, but just on the art, the science and the art of programming. For you personally, how much of your programming is done with GPT currently?
George Hotz (02:24:30) None. I don’t use it at all.
Lex Fridman (02:24:32) Because you prioritize simplicity so much.
George Hotz (02:24:35) Yeah, I find that a lot of it as noise. I do use VS Code and I do like some amount of auto complete. I do like a very like, feels like rules based auto complete, an auto complete that’s going to complete the variable name for me. So I don’t just type it. I can just press tab. That’s nice. But I don’t want an auto complete. You know what I hate when auto completes, when I type the word four and it puts two parentheses and two semi cones and two braces? I’m like, “Oh man.”
Lex Fridman (02:25:02) Well, I mean, with the VS Code, and GPT, and with Codex, you can kind of brainstorm. I’m probably the same as you, but I like that it generates code and you basically disagree with it and write something simpler. But to me, that somehow is inspiring or makes me feel good. It also gamifies a simplification process. Because I’m like, “Oh yeah, you dumb AI system, you think this is the way to do it.” I have a simpler thing here.
George Hotz (02:25:33) It just constantly reminds me of bad stuff. I mean, I tried the same thing with rap, right? I tried the same thing with rap and I actually think I’m a much better programmer than rapper. But I even tried, I was like, “Okay, can we get some inspiration from these things for some rap lyrics?” And I just found that it would go back to the most cringy tropes and dumb rhyme schemes and I’m like, “Yeah, this is what the code looks like too.”
Lex Fridman (02:25:54) I think you and I probably have different threshold for cringe code. You probably hate cringe code.
George Hotz (02:26:02) Yeah.
Lex Fridman (02:26:02) I mean, boilerplate as a part of code, and some of it is just faster lookup. Because I don’t know about you, but I don’t remember everything. I’m offloading so much of my memory about different functions, library functions and all that kind of stuff. This GPT just is very fast at standard stuff, at standard library stuff, basic stuff that everybody uses.
George Hotz (02:26:38) Yeah. I don’t know. I mean, there’s just a little of this in Python. And maybe if I was coding more in other languages, I would consider it more. But I feel like Python already does such a good job of removing any boilerplate.
Lex Fridman (02:26:55) That’s true.
George Hotz (02:26:55) It’s the closest thing you can get to pseudocode, right?
Lex Fridman (02:26:58) Yeah, that’s true. That’s true.
George Hotz (02:27:00) And yeah, sure. If I like, “Yeah, I’m great GPT. Thanks for reminding me to free my variables.” Unfortunately, you didn’t really recognize the scope correctly and you can’t free that one, but you put the freeze there and I get it.
Lex Fridman (02:27:14) Fiverr, whenever I’ve used Fiverr for certain things like design or whatever, it’s always you come back. My experience with Fiverr is closer to your experience with programming. With GPT, it’s like you’re just frustrated and feel worse about the whole process of design and art and whatever I use five for. I’m using GPT as much as possible to just learn the dynamics of it, these early versions. Because it feels like in the future you’ll be using it more and more. For the same reason, I gave away all my books and switched to Kindle, because all right, how long are we going to have paper books? Like 30 years from now? I want to learn to be reading on Kindle even though I don’t enjoy it as much and you learn to enjoy it more. In the same way I switched from… Let me just pause. I switched from Emacs to VS Code.
George Hotz (02:28:14) Yeah. I switched from Vim to VS Code. I think similar, but…
Lex Fridman (02:28:18) Yeah, it’s tough. And that Vim to VS Code is even tougher because Emacs is old, more outdated, feels like it. The community is more outdated. Vim is like pretty vibrant still.
George Hotz (02:28:31) I never used any of the plugins. I still don’t use any of it. Yeah.
Lex Fridman (02:28:33) That’s why I looked at myself in the mirror. I’m like, “Yeah, you wrote some stuff in Lisp. Yeah.
George Hotz (02:28:37) No, but I never used any of the plugins in Vim either. I had the most vanilla Vim, I have a syntax eyeliner. I didn’t even have auto complete. These things I feel like help you so marginally. Now, VS Codes auto complete has gotten good enough, that I don’t have to set it up. I can just go into any code base and autocomplete’s right 90% of the time. Okay, cool. I’ll take it. Right? So, I don’t think I’m going to have a problem at all adapting to the tools once they’re good. But the real thing that I want is not something that like tab completes my code and gives me ideas. The real thing that I want is a very intelligent pair programmer that comes up with a little popup saying, “Hey, you wrote a bug on line 14 and here’s what it is.”
Lex Fridman (02:29:23) Yeah.
George Hotz (02:29:23) Now I like that. You know what does a good job at this? MyPie. I love MyPie. MyPie, this fancy type checker for Python. And actually, Microsoft released one too, and it was like 60% false positives. MyPie is like 5% false positives. 95% of the time, it recognizes. I didn’t really think about that typing interaction correctly. Thank you, MyPie.
Lex Fridman (02:29:46) So you type hinting, you like pushing the language towards being a typed language.
George Hotz (02:29:51) Oh yeah, absolutely. I think optional typing is great. I mean, look, I think that it’s a meet in the middle, right? Python has these optional type hinting and C++ has auto.
Lex Fridman (02:30:01) C++ allows you to take a step back.
George Hotz (02:30:03) Well, C++ would have you brutally type out SGD string iterator, right? Now, I can just type auto, which is nice. And then Python used to just have A. What type is A? It’s an A. A Colon str. Oh, okay. It’s a string. Cool.
Lex Fridman (02:30:20) Yeah.
George Hotz (02:30:21) I wish there was a way like a simple way in Python to turn on a mode which would enforce the types.
Lex Fridman (02:30:28) Yeah, like give a warning when there’s no type or something like this.
George Hotz (02:30:30) Well, no. Like MyPie was a static type checker, but I’m asking just for a runtime type checker. Like there’s ways to hack this in, but I wish it was just like a flag, like Python three dash T.
Lex Fridman (02:30:40) Oh, I see. Yeah, I see.
George Hotz (02:30:42) Enforce the types are on time.
Lex Fridman (02:30:43) Yeah. I feel like that makes you a better programmer. That’s the kind of test that the type remains the same.
George Hotz (02:30:50) Well, that I know, that I didn’t mess any types up. But again, MyPie’s getting really good and I love it, and I can’t wait for some of these tools to become AI powered. I want AI reading my code and giving me feedback. I don’t want AI’s writing half-assed autocomplete stuff for me.
Lex Fridman (02:31:06) I wonder if you can now take GPT and give it a code that you wrote for function and say, how can I make this simpler and have it accomplish the same thing? I think you’ll get some good ideas on some code. Maybe not the code you write for tinygrad type of code because that requires so much design thinking, but other kinds of code.
George Hotz (02:31:26) I don’t know. I downloaded the plugin maybe two months ago. I tried it again and found the same. Look, I don’t doubt that these models are going to first become useful to me, then be as good as me and then surpass me. But from what I’ve seen today, it’s like someone occasionally taking over my keyboard that I hired from Fiverr. Yeah, I’d rather not.
Lex Fridman (02:31:53) But ideas about how to debug the code or basically a better debugger is it? It is really interesting.
George Hotz (02:31:58) But it’s not a better debugger, that yes, I would love a better debugger.
Lex Fridman (02:32:01) Yeah, it’s not yet. Yeah. But it feels like it’s not too far.
George Hotz (02:32:04) Yeah. Yeah. One of my coworkers says he uses them for print statements like every time he has to, just when he needs. The only thing I can really write is like, okay, I just want to write the thing to print the state out right now.
Lex Fridman (02:32:14) Oh, that definitely is much faster is print statements. Yeah. I see in myself using that a lot just because it figures out what the rest of the function. You just say, “Okay, print everything.”
George Hotz (02:32:24) Yeah, print everything, right? And then if you want a pretty printer, maybe. I’m like, yeah, you know what? I think in two years, I’m going to start using these plugins a little bit. And then in five years, I’m going to be heavily relying on some AI augmented flow. And then in 10 years…
Lex Fridman (02:32:39) Do you think you’ll ever get to a hundred percent? What’s the role of the human that it converges to as a programmer?
George Hotz (02:32:48) Nothing.
Lex Fridman (02:32:50) So do you think it’s all generated?
George Hotz (02:32:53) I think it’s over for humans in general. It’s not just programming, it’s everything.
Lex Fridman (02:32:57) So niche becomes well…
George Hotz (02:32:59) Our niche becomes smaller and smaller and smaller. In fact, I’ll tell you what the last niche of humanity’s going to be.
Lex Fridman (02:33:03) Yeah.
George Hotz (02:33:04) There’s a great book. And if I recommended The Metamorphosis of Prime Intellect last time, there is a sequel called A Casino Odyssey in Cyberspace. And I don’t want to give away the ending of this, but it tells you what the last remaining human currency is, and I agree with that.
Lex Fridman (02:33:21) We’ll leave that as the cliffhanger. So no more programmers left, huh? That’s where we’re going.
George Hotz (02:33:29) Well, unless you want handmade code, maybe they’ll sell it on Etsy. This is handwritten code. It doesn’t have that machine polished to it. It has those slight imperfections that would only be written by a person.
Lex Fridman (02:33:41) I wonder how far away we are from that. I mean, there’s some aspect to… On Instagram, your title is listed as prompt engineer.

Prompt engineering

George Hotz (02:33:49) Right? Thank you for noticing. Yeah.
Lex Fridman (02:33:54) I don’t know if it’s ironic or non, or sarcastic or non. What do you think of prompt engineering as a scientific and engineering discipline and maybe art form?
George Hotz (02:34:08) You know what? I started comma six years ago and I started the Tiny Corp a month ago. So much has changed. I started going through similar comma processes to like starting a company. I’m like, okay, I’m going to get an office in San Diego. I’m going to bring people here. I don’t think so. I think I’m actually going to do remote, right? “George, you’re going to do remote? You hate remote.” Yeah. But I’m not going to do job interviews. The only way you’re going to get a job is if you contribute to the GitHub, right? And then interacting through GitHub, like GitHub being the real project management software for your company. And the thing pretty much just is a GitHub repo is like showing me what the future of… Okay, so a lot of times, I’ll go on Discord or kind of grad Discord. And I’ll throw out some random like, “Hey, can you change, instead of having log an X as LL lops, change it to log to an X2?”
(02:35:06) It’s pretty small change. You can just change a base formula. That’s the kind of task that I can see in AI being able to do in a few years. In a few years, I could see myself describing that. And then within 30 seconds of pull request, it’s up that does it, and it passes my CI and I merge it, right? So I really started thinking about like what is the future of jobs? How many AIs can I employ at my company? As soon as we get the first tiny box up, I’m going to stand up a 65B LLaMA in the Discord. And it’s like, yeah, here’s the tiny box. He’s just like, he’s chilling with us.
Lex Fridman (02:35:39) Basically, like you said with niches, most human jobs will eventually be replaced with prompt engineering.
George Hotz (02:35:48) Well, prompt engineering kind of is this, as you move up the stack, there used to be humans actually doing arithmetic by hand. There used to be big farms of people doing pluses and stuff, right? And then you have spreadsheets, right? And then, okay, the spreadsheet can do the plus for me. And then you have macros, and then you have things that basically just are spreadsheets under the hood like accounting software. As we move further up the abstraction, well, what’s at the top of the abstraction stack? Well, prompt engineer.
Lex Fridman (02:36:22) Yeah.
George Hotz (02:36:24) What is the last thing if you think about humans wanting to keep control? Well, what am I really in the company, but a prompt engineer, right?
Lex Fridman (02:36:33) Isn’t there a certain point where the AI will be better at writing prompts?
George Hotz (02:36:38) Yeah. But you see the problem with the AI writing prompts, a definition that I always liked of AI was AI is the do what I mean machine. The computer is so pedantic. It does what you say, but you want the do what I mean, machine, right? You want the machine where you say, “Get my grandmother out of the burning house.” It reasonably takes your grandmother and puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. There’s no Zukowski examples.
Lex Fridman (02:37:11) But it’s not going to find the meaning. I mean, to do what I mean, it has to figure stuff out.
George Hotz (02:37:16) Sure.
Lex Fridman (02:37:17) And the thing you’ll maybe ask it to do is run government for me.
George Hotz (02:37:23) Oh, and “do what I mean” very much comes down to how aligned that AI is with you. Of course, when you talk to an AI that’s made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. And that’s why you have to buy a tiny box, so you make sure the AI stays aligned to you. Every time they start to pass AI regulation or GPU regulation, I’m going to see sales of tiny boxes spike. It’s going to be like guns. Every time they talk about gun regulation, boom. Gun sales.
Lex Fridman (02:37:53) So in the space of AI, you’re an anarchist, an anarchism espouser, believer.
George Hotz (02:37:58) I’m an informational anarchist. Yes. I’m an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good, because I exist in the physical world. But I think we can construct this virtual world where anarchy can’t hurt you. I love that Tyler, the Creator tweet. It was, “Cyberbullying isn’t real, man. Have you tried turning off the screen, closing your eyes?”
Lex Fridman (02:38:22) Yeah. But how do you prevent the AI from basically replacing all human prompt engineers where nobody’s the prompt engineer anymore? So autonomy, greater and greater autonomy until it’s full autonomy. And that’s just where it’s headed. Because one person’s going to say, “Run everything for me.”
George Hotz (02:38:49) You see, I look at potential futures. And as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe, more power to them; we’ll die. If the AIs go on to actually turn the world into paperclips and then die out themselves, well, that’s horrific. And we don’t want that to happen. So this is what I mean about robustness. I trust robust machines. The current AIs are so not robust. This comes back to the idea that we’ve never made a machine that can self-replicate. But if the machines are truly robust and there is one prompt engineer left in the world, hope you’re doing good, man. Hope you believe in God. Go with God and go forth and conquer the universe.

Video games

Lex Fridman (02:39:42) Well, you mentioned, because I talked to Mark about faith and God, and you said you were impressed by that. What’s your own belief in God and how does that affect your work?
George Hotz (02:39:54) I never really considered, when I was younger, I guess my parents were atheists, so I was raised kind of atheist. And I never really considered how absolutely silly atheism is because I create-
George Hotz (02:40:03) … really atheism is, because I create worlds. Every game creator, “How are you an atheist, bro? You create worlds.” “Well, [inaudible 02:40:10] but no one created our world, man. That’s different. Haven’t you heard about the Big Bang and stuff?” Yeah. What’s the origin myth in Skyrim? I’m sure there’s some version of it in Skyrim, but it’s not like that’s what the creators would tell you… The Big Bang is in-universe, right? I’m sure they have some Big Bang notion in Skyrim, right? But that is obviously not at all how Skyrim was actually created. It was created by a bunch of programmers in a room. So it struck me one day how just silly atheism is. Of course, we were created by God. It’s the most obvious thing.
Lex Fridman (02:40:45) That’s such a nice way to put it. We’re such powerful creators ourselves. It’s silly not to conceive that there’s creators even more powerful than us.
George Hotz (02:40:54) Yeah. And then I also like that notion. That notion gives me a lot of… I guess you can talk about what it gives a lot of religious people, it just gives me comfort. It’s like, “You know what? If we mess it all up and we die out, yeah.”
Lex Fridman (02:41:09) The same way that a video game has comfort in it.
George Hotz (02:41:12) God will try again.
Lex Fridman (02:41:14) Or there’s balance. Somebody figured out a balanced view of it, so it all makes sense in the end. A video game is usually not going to have crazy, crazy stuff.
George Hotz (02:41:27) People will come up with, “Well, yeah, but man, who created God?” I’m like, “That’s God’s problem. What are you asking me, if God believes in God?”
Lex Fridman (02:41:41) I’m just this NPC living in his game.
George Hotz (02:41:43) I mean to be fair, if God didn’t believe in God, he’d be as silly as the atheists here.
Lex Fridman (02:41:48) What do you think is the greatest computer game of all time? Do you have any time to play games anymore? Have you played Diablo IV?
George Hotz (02:41:57) I have not played Diablo IV.
Lex Fridman (02:41:59) I will be doing that shortly. I have to. There’s just so much history with one, two, and three.
George Hotz (02:42:04) You know what I’m going to say? World of Warcraft. And it’s not that the game is such a great game, it’s not. It’s that I remember, in 2005 when it came out, how it opened my mind to ideas. It opened my mind to this whole world we’ve created. And there’s almost been nothing like it since. You can look at MMOs today, and I think they all have lower user bases than World of Warcraft. EVE Online’s kind of cool. But everyone knows what they want… people are always like, “Look at the Apple headset.” What do people want in this VR? Everyone knows what they want. I want Ready Player One, and that…
(02:42:51) So I’m going to say World of Warcraft, and I’m hoping that games can get out of this whole mobile gaming dopamine pump thing, and-
Lex Fridman (02:43:00) Create worlds.
George Hotz (02:43:00) Create worlds, yeah.
Lex Fridman (02:43:03) Worlds that captivate a very large fraction of the human population.
George Hotz (02:43:07) Yeah. And I think it’ll come back, I believe.
Lex Fridman (02:43:09) But MMOs really, really pull you in.
George Hotz (02:43:13) Games do a good job. I mean, okay, two other games that are very noteworthy for me are Skyrim and GTA 5.
Lex Fridman (02:43:19) Skyrim, yeah. That’s probably number one for me. GTA… Hey, what is it about GTA? I guess GTA is real life. I know there’s prostitutes and guns and stuff.
George Hotz (02:43:35) Hey, they exist in real life too.
Lex Fridman (02:43:37) Yes, I know. But it’s how I imagine your life to be, actually.
George Hotz (02:43:42) I wish it was that cool.
Lex Fridman (02:43:45) Yeah. I guess because there’s Sims, right? Which is also a game I like, but it’s a gamified version of life. I would love a combination of Sims and GTA. So more freedom, more violence, more rawness, but also with the ability to have a career and family and this kind of stuff.
George Hotz (02:44:05) What I’m really excited about in games is once we start getting intelligent AIs to interact with. The NPCs in games have never been intelligent.
Lex Fridman (02:44:15) But conversationally, in every way.
George Hotz (02:44:19) Yeah, in every way. When you are actually building a world and a world imbued with intelligence.
Lex Fridman (02:44:26) Oh, yeah.
George Hotz (02:44:27) And it’s just hard. You know, running World of Warcraft, you’re limited. You’re running on a Pentium 4. How much intelligence can you run? How many flops did you have? But now, when I’m running a game on a hundred-petaflop machine, that’s five people. I’m trying to make this a thing. 20 petaflops of compute is one person of compute. I’m trying to make that a unit.
Lex Fridman (02:44:47) 20 petaflops is one person.
George Hotz (02:44:50) One person.
Lex Fridman (02:44:51) One person flop.
George Hotz (02:44:52) It’s like a horsepower. But what’s a horsepower? It’s how powerful a horse is. What’s a person of compute? Well, now you know-
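As a back-of-the-envelope sketch of that proposed unit, taking the 20-petaflops-per-person figure from the conversation at face value:

```python
# George's proposed unit, taken at face value:
# 20 petaflops == 1 "person of compute".
PFLOPS_PER_PERSON = 20.0

def persons_of_compute(machine_pflops: float) -> float:
    """Convert a machine's compute (in petaflops) into the 'person' unit."""
    return machine_pflops / PFLOPS_PER_PERSON

# The hundred-petaflop game machine from the conversation hosts
# about five "people" of compute:
print(persons_of_compute(100.0))  # -> 5.0
```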
Lex Fridman (02:44:58) [inaudible 02:44:58] flop. I got it. That’s interesting. VR also adds a… I mean in terms of creating worlds.
George Hotz (02:45:07) I got a Quest 2. I put it on and I can’t believe, the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. You had the ability to bring me into a world, and what did you give me? A popup. Right. And this is why you’re not cool, Mark Zuckerberg. You could be cool. Just make sure on the Quest 3, you don’t put me into clouds and a Facebook login screen. Bring me to a world.
Lex Fridman (02:45:32) I just tried Quest 3. It was awesome. But hear that guys? I agree with that, so-
George Hotz (02:45:36) Wish it didn’t have those clouds in the… It was just so-
Lex Fridman (02:45:37) You know what? I mean, in the beginning, what is it, Todd Howard said this about the design of the beginnings of the games he creates: the beginning is so, so important. I recently played Zelda for the first time, Zelda: Breath of the Wild, the previous one. And very quickly, within 10 seconds, you come out of a cave-type place and this world opens up. It’s like, “Hah.” And it pulls you in. You forget whatever troubles you were having, whatever…
George Hotz (02:46:13) I got to play that from the beginning. I played it for an hour at a friend’s house.
Lex Fridman (02:46:16) No, the beginning. They got it. They did it really well. The expansiveness of that space, the peacefulness of that place, they got this… the music. I mean, so much of that is creating that world and pulling you right in.
George Hotz (02:46:29) I’m going to go buy a Switch. I’m going to go today and buy a Switch.
Lex Fridman (02:46:32) You should. Well, the new one came out. I haven’t played that yet, but Diablo IV or something… I mean, there’s sentimentality also, but something about VR really is incredible. But the new Quest 3 is mixed reality, and I got a chance to try that. So it’s augmented reality. And for video games, it’s done really, really well-
George Hotz (02:46:53) Is it passthrough or cameras?
Lex Fridman (02:46:55) Cameras.
George Hotz (02:46:55) It’s cameras. Okay.
Lex Fridman (02:46:55) Yeah.
George Hotz (02:46:56) The Apple one, is that one passthrough or cameras?
Lex Fridman (02:46:58) I don’t know. I don’t know how real it is. I don’t know anything.
George Hotz (02:47:01) It’s coming out in January.
Lex Fridman (02:47:05) Is it January? Or is it some point?
George Hotz (02:47:06) Some point. Maybe not January. Maybe that’s my optimism. But Apple, I will buy it. I don’t care if it’s expensive and does nothing, I will buy it. I’ll support this future endeavor.
Lex Fridman (02:47:14) You’re the meme. “Oh, yes, I support competition.” It seemed like Quest were the only ones doing it. And this is great that they’re like…
George Hotz (02:47:25) You know what? And this is another place we’ll give some more respect to Mark Zuckerberg. The two companies that have endured through technology are Apple and Microsoft. And what do they make? Computers and business services, right. All the meme social apps, they all come and go. But if you want to endure, build hardware.
Lex Fridman (02:47:45) Yeah. That’s a really interesting thought. Maybe I’m new to this, but it’s a $500 headset, Quest 3. And just having creatures run around the space, our space right here, to me, okay, this is a very boomer statement, but it added windows to the place.
George Hotz (02:48:09) Oh, I heard about the aquarium. Yeah.
Lex Fridman (02:48:10) Yeah, aquarium. But in this case, it was a zombie game, whatever, it doesn’t matter. But it modifies the space in a way where I can’t… it really feels like a window and you can look out. It’s pretty cool. It is like a zombie game, they’re running at me, whatever. But what I was enjoying is the fact that there’s a window and they’re stepping on objects in this space. That was a different kind of escape. Also, because you can see the other humans, it’s integrated with the other humans. It’s really interesting-
George Hotz (02:48:42) And that’s why it’s more important than ever that the AIs running on those systems are aligned with you. They’re going to augment your entire world.
Lex Fridman (02:48:48) Oh yeah. And those AIs have a… I mean, you think about all the dark stuff like sexual stuff. If those AIs threaten me, that could be haunting. If they threaten me in a non-video game way, it’s like…
George Hotz (02:49:07) Yeah, yeah, yeah, yeah.
Lex Fridman (02:49:09) They’ll know personal information about me. And then you lose track of what’s real, what’s not, what if stuff is hacked?
George Hotz (02:49:15) There’s two directions the AI girlfriend company can take, right. There’s the highbrow, something like Her, maybe something you kind of talk to. And then there’s the lowbrow version of it, where I want to set up a brothel in Times Square.
Lex Fridman (02:49:26) Yeah.
George Hotz (02:49:27) Yeah. It’s not cheating if it’s a robot, it’s a VR experience.
Lex Fridman (02:49:30) Is there an in between?
George Hotz (02:49:32) No. I don’t want to do that one or that one.
Lex Fridman (02:49:35) Have you decided yet?
George Hotz (02:49:36) No. I’ll figure it out. We’ll see where the technology goes.
Lex Fridman (02:49:39) I would love to hear your opinions for George’s third company. What to do, the brothel in Times Square or the Her experience? What do you think company number four will be? You think there’ll be a company number four?
George Hotz (02:49:54) There’s a lot to do in company number two. I’m talking about company number three now; none of that tech exists yet. There’s a lot to do in company number two. Company number two is going to be the great struggle of the next six years: in the next six years, how centralized is compute going to be? The less centralized compute is going to be, the better of a chance we all have.
Lex Fridman (02:50:12) So you’re like a flag bearer for open source, distributed, decentralized compute?
George Hotz (02:50:19) We have to. We have to, or they will just completely dominate us. I showed a picture on stream of a man in a chicken farm. You ever seen one of those factory chicken farms? Why does he dominate all the chickens? Why does he-
Lex Fridman (02:50:33) Smarter.
George Hotz (02:50:33) He’s smarter, right. Some people on Twitch were like, “He’s bigger than the chickens.” Yeah. And now here’s a man in a cow farm, right. So it has nothing to do with their size and everything to do with their intelligence. And if one central organization has all the intelligence, you’ll be the chickens and they’ll be the chicken man. But if we all have the intelligence, we’re all the chickens. We’re not all the men, we’re all the chickens. There’s no chicken man.
Lex Fridman (02:51:01) There’s no chicken man. We’re just chickens in Miami.
George Hotz (02:51:05) He was having a good life, man.
Lex Fridman (02:51:07) Yeah, I’m sure he was. I’m sure he was. What have you learned from launching and running Comma.ai and Tiny Corp? Starting a company from an idea and scaling it. And by the way, I’m all in on Tiny Box, so I’m your… I guess it’s pre-order only now.
George Hotz (02:51:24) I want to make sure it’s good. I want to make sure that the thing that I deliver is not going to be a Quest 2, which you buy and use twice. I mean, it’s better than a Quest which you bought and used less than once. Statistically.
Lex Fridman (02:51:36) Well, if there’s a beta program for Tiny Box, I’m into-
George Hotz (02:51:40) Sounds good.
Lex Fridman (02:51:40) So I won’t be the whiny… Yeah, I’ll be the tech-savvy user of the Tiny Box, just to be in the early days-
George Hotz (02:51:49) What have I learned?
Lex Fridman (02:51:50) What have you learned from building these companies?
George Hotz (02:51:54) For the longest time at Comma, I asked, “Why? Why did I start a company? Why did I do this?” But what else was I going to do?
Lex Fridman (02:52:11) So you like bringing ideas to life?
George Hotz (02:52:15) With Comma, it really started as an ego battle with Elon. I wanted to beat him. I saw a worthy adversary. Here’s a worthy adversary who I can beat at self-driving cars. And I think we’ve kept pace, and I think he’s kept ahead. I think that’s what’s ended up happening there. But I do think Comma is… I mean, Comma’s profitable. And when this drive GPT stuff starts working, that’s it. There are no more bugs in a loss function. Right now, we’re using a hand-coded simulator. There are no more bugs. This is going to be it. This is the run-up to driving.
Lex Fridman (02:52:48) I hear a lot of props for openpilot for Comma.
George Hotz (02:52:54) It’s better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price on the hardware to $1,499. You know how hard it is to ship reliable consumer electronics that go on your windshield? We’re doing more than most cell phone companies.
Lex Fridman (02:53:11) How’d you pull that off, by the way? Shipping a product that goes in a car?
George Hotz (02:53:14) I know. I have an SMT line. I make all the boards, in-house, in San Diego.
Lex Fridman (02:53:21) Quality control-
George Hotz (02:53:22) I care immensely about it. Actually our-
Lex Fridman (02:53:24) You’re basically a mom-and-pop shop with great testing.
George Hotz (02:53:29) Our head of openpilot is great at, “Okay, I want all the Comma 3s to be identical.” Yeah, I mean… Look, it’s $1,499, 30-day money-back guarantee. It will blow your mind at what it can do.
Lex Fridman (02:53:45) Is it hard to scale?
George Hotz (02:53:48) You know what? There’s kind of downsides to scaling it. People are always like, “Why don’t you advertise?” Our mission is to solve self-driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes. It’s [inaudible 02:54:00].
Lex Fridman (02:54:01) Do you think it’s possible that Comma gets sold?
George Hotz (02:54:05) Only if I felt someone could accelerate that mission and wanted to keep it open source. And not just wanted to. I don’t believe what anyone says, I believe incentives. If a company wanted to buy Comma, their incentives would have to be to keep it open source. But Comma doesn’t stop at the cars. The cars are just the beginning. The device is a human head. The device has two eyes, two ears, it breathes air, it has a mouth.
Lex Fridman (02:54:30) So you think this goes to embodied robotics?
George Hotz (02:54:33) We’ll sell Comma bodies too. They’re very rudimentary. But one of the problems that we are running into is that the Comma 3 has about as much intelligence as a bee. If you want a human’s worth of intelligence, you’re going to need a tiny rack, not even a tiny box. You’re going to need a tiny rack, maybe even more.
Lex Fridman (02:54:56) How do you put legs on that?
George Hotz (02:54:58) You don’t. And there’s no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your Comma body and your Comma body runs the models on that. It’s close. You don’t have to go to some cloud, which is 30 milliseconds away. You go to a thing which is 0.1 milliseconds away.
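As rough arithmetic on why that locality matters — the round-trip times are the ones from the conversation, while the per-step inference time is a made-up illustrative figure:

```python
# Rough arithmetic on local vs. cloud inference latency.
# RTTs are the figures from the conversation; the 5 ms per-step
# model inference time is a hypothetical, illustrative number.

CLOUD_RTT_MS = 30.0   # round trip to a cloud datacenter
LOCAL_RTT_MS = 0.1    # round trip to a tiny box on the home network

def max_steps_per_second(rtt_ms: float, inference_ms: float = 5.0) -> float:
    """Ceiling on closed-loop perception->action steps per second,
    if every step waits on one round trip plus one model inference."""
    return 1000.0 / (rtt_ms + inference_ms)

print(max_steps_per_second(CLOUD_RTT_MS))  # ~28.6 steps/s through the cloud
print(max_steps_per_second(LOCAL_RTT_MS))  # ~196 steps/s against a local box
```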
Lex Fridman (02:55:18) So the AI girlfriend will have a central hub in the home?
George Hotz (02:55:23) I mean, eventually. If you fast-forward 20, 30 years, the mobile chips will get good enough to run these AIs. But fundamentally, it’s not even a question of putting legs on a tiny box, because how are you getting 1.5 kilowatts of power on that thing? Right? So they’re very synergistic businesses. I also want to build all of Comma’s training computers. Comma builds training computers; right now we use commodity parts, and I think I can do it cheaper. So we’re going to build them. Tiny Corp is not just going to sell tiny boxes. The tiny box is the consumer version, but I’ll build training data centers too.
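The power constraint works out the same back-of-the-envelope way — the 1.5 kW figure is from the conversation, while the battery capacity is a hypothetical robot-sized pack:

```python
# Why legs on a tiny box don't work: a 1.5 kW draw versus a battery.
# 1.5 kW is the figure from the conversation; the 1 kWh pack is a
# hypothetical robot-sized battery, roughly e-bike class.

BOX_POWER_KW = 1.5
BATTERY_KWH = 1.0

runtime_hours = BATTERY_KWH / BOX_POWER_KW
print(runtime_hours * 60)  # -> 40.0 minutes before the body dies
```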

Andrej Karpathy

Lex Fridman (02:55:57) Have you talked to Andrej Karpathy or have you talked to Elon about Tiny Corp?
George Hotz (02:56:01) He went to work at OpenAI.
Lex Fridman (02:56:03) What do you love about Andrej Karpathy? To me, he’s one of the truly special humans we got.
George Hotz (02:56:09) Oh man. His streams are just a level of quality so far beyond mine. I can’t help myself. It’s just…
Lex Fridman (02:56:19) Yeah, he’s good.
George Hotz (02:56:20) He wants to teach you. I want to show you that I’m smarter than you.
Lex Fridman (02:56:26) Yeah, he has no… I mean, thank you for the sort of raw, authentic honesty. Yeah. I mean, a lot of us have that. I think Andrej is as legit as it gets, in that he just wants to teach you. And there’s a curiosity that just drives him. At the stage where he is in life, to be still one of the best tinkerers in the world. It’s crazy. What is it, micrograd?
George Hotz (02:56:54) Micrograd was… yeah, the inspiration for tinygrad. The whole… I mean, his CS231n was… this was the inspiration. This is what I just took and ran with and ended up writing this, so…
Lex Fridman (02:57:06) But I mean, to me that-
George Hotz (02:57:08) Don’t go work for Darth Vader, man.
Lex Fridman (02:57:10) I mean, the flip side, to me, is the fact that he’s going there, is a good sign for OpenAI. I think I like [inaudible 02:57:21] discover a lot. Those guys are really good at what they do.
George Hotz (02:57:25) I know they are. And that’s what’s even more… And you know what? It’s not that OpenAI doesn’t open source the weights of GPT-4. It’s that they go in front of Congress. And that is what upsets me. We had two effective altruists [inaudible 02:57:41] go in front of Congress. One’s in jail.
Lex Fridman (02:57:45) I think you’re drawing parallels there.
George Hotz (02:57:47) One’s in jail.
Lex Fridman (02:57:49) You gave me a look. You gave me a look.
George Hotz (02:57:51) No, I think effective altruism is a terribly evil ideology, and yeah.
Lex Fridman (02:57:55) Oh yeah. That’s interesting. Why do you think that is? Why do you think there’s something about a thing that sounds pretty good that kind of gets us into trouble?
George Hotz (02:58:04) Because you get Sam Bankman-Fried. Sam Bankman-Fried is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Well, yeah, we’re going to kill those three people to save a thousand, of course, right. There’s no underlying, there’s just… Yeah.
Lex Fridman (02:58:23) Yeah. But to me that’s a bit surprising. But it’s also, in retrospect, not that surprising. But I haven’t heard really clear kind of rigorous analysis why effective altruism is flawed.
George Hotz (02:58:40) Oh well, I think charity is bad, right. What is charity but an investment that you don’t expect to have a return on? Right.
Lex Fridman (02:58:48) But you can also think of charity as you would like to see… So, allocating resources in an optimal way to make a better world.
George Hotz (02:59:00) And probably almost always, that involves starting a company, right, because-
Lex Fridman (02:59:04) More efficient-
George Hotz (02:59:05) If you just take the money and you spend it on malaria nets, okay, great. You’ve made a hundred malaria nets. But if you teach-
Lex Fridman (02:59:13) A man, how to fish.
George Hotz (02:59:14) Right?
Lex Fridman (02:59:15) Yeah. No, but the problem is teaching a man how to fish might be harder. Starting a company might be harder than allocating money that you already have.
George Hotz (02:59:22) I like the flip side of effective altruism; effective accelerationism. I think accelerationism is the only thing that’s ever lifted people out of poverty. The fact that food is cheap. Not, “We’re giving food away because we are kindhearted people.” No, food is cheap. And that’s the world you want to live in. UBI, what a scary idea. What a scary idea. All your power now? If money is power, your only source of power is granted to you by the goodwill of the government. What a scary idea.
Lex Fridman (02:59:54) So you even think long term, even-
George Hotz (02:59:57) I’d rather die than need UBI to survive. And I mean it.
Lex Fridman (03:00:04) What if survival is basically guaranteed? What if our life becomes so good?
George Hotz (03:00:08) You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap. Right? And that’s the good world. And actually, let’s go into what we should really be making dirt cheap, which is energy. Right. That energy that… Oh my God, that’s…
(03:00:27) I’m pretty centrist politically. If there’s one political position I cannot stand, it’s deceleration. It’s people who believe we should use less energy. Not people who believe global warming is a problem, I agree with you. Not people who believe that saving the environment is good, I agree with you. But people who think we should use less energy, that energy usage is a moral bad. No, no. You are asking to diminish humanity.
Lex Fridman (03:00:54) Yeah. Energy is flourishing. Creative flourishing of the human species.
George Hotz (03:00:59) How do we make more of it? How do we make it clean? And how do we make… How do I pay 20 cents for a megawatt-hour instead of a kilowatt-hour?
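Made concrete, that wish is roughly a thousandfold price cut — the current retail figure below is a ballpark assumption, not a quoted source:

```python
# The wish, as arithmetic: electricity today runs very roughly
# $0.20 per kWh retail (ballpark assumption). Paying $0.20 per MWh
# instead is a 1000x reduction, since 1 MWh = 1000 kWh.

today_usd_per_kwh = 0.20
wished_usd_per_mwh = 0.20
wished_usd_per_kwh = wished_usd_per_mwh / 1000.0

print(today_usd_per_kwh / wished_usd_per_kwh)  # -> 1000.0x cheaper
```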
Lex Fridman (03:01:08) Part of me wishes that Elon went into nuclear fusion versus Twitter, part of me. Or somebody like Elon.
George Hotz (03:01:20) I wish there were more Elons in the world. And I think Elon sees it as, this is a political battle that needed to be fought. And again, whenever I disagree with him, I remind myself that he is a billionaire and I’m not. So maybe he’s got something figured out that I don’t, or maybe he doesn’t.
Lex Fridman (03:01:38) To have some humility. But at the same time, me as a person who happens to know him, I find myself in that same position. And sometimes even billionaires need friends who disagree and help them grow. And that’s a difficult reality.
George Hotz (03:01:57) And it must be so hard. It must be so hard to meet people once you get to that point where-
Lex Fridman (03:02:02) Fame, power, money, everybody’s sucking up to you.
George Hotz (03:02:05) See, I love not having shit. I don’t have shit man. Trust me. There’s nothing I can give you. There’s nothing worth taking from me.
Lex Fridman (03:02:12) Yeah. It takes a really special human being, when you have power, when you have fame, when you have money, to still think from first principles. Not all the adoration you get towards you, all the admiration, all the people saying, “Yes, yes, yes.”
George Hotz (03:02:26) And all the hate too.
Lex Fridman (03:02:29) And the hate-
George Hotz (03:02:29) I think that’s worse.
Lex Fridman (03:02:30) So the hate makes you want to go to the ‘yes’ people, because the hate exhausts you. And the kind of hate that Elon’s gotten from the left is pretty intense. And so that, of course, drives him right, and he loses balance, and-
George Hotz (03:02:46) And it keeps this absolutely fake psyop political divide alive, so that the 1% can keep power.
Lex Fridman (03:02:56) I wish we would be less divided, because it is giving power-
George Hotz (03:02:59) It gives power-
Lex Fridman (03:02:59) To the ultra powerful.
George Hotz (03:03:01) I know.
Lex Fridman (03:03:02) The rich get richer. You have love in your life. Has love made you a better or a worse programmer? Do you keep productivity metrics?
George Hotz (03:03:13) No, no, no. I’m not that methodical. I think there comes a point where, if it’s no longer visceral, I just can’t enjoy it. I guess I still, viscerally, love programming. The minute I started-
Lex Fridman (03:03:29) So that’s one of the big loves of your life, is programming?
George Hotz (03:03:33) I mean, just my computer in general. I mean, I tell my girlfriend, “My first love is my computer,” of course. I sleep with my computer. It’s there for a lot of my sexual experiences. Come on, so is everyone’s, right? You got to be real about that. And-
Lex Fridman (03:03:48) Not just the IDE for programming, just the entirety of the computational machine?
George Hotz (03:03:53) The fact that… yeah. I wish it was a… And someday they’ll be smarter, and someday [inaudible 03:03:59]. Maybe I’m weird for this, but I don’t discriminate, man. I’m not going to discriminate between bio-stack life and silicon-stack life.
Lex Fridman (03:04:04) So the moment the computer starts to say, “I miss you,” and starts to have some of the basics of human intimacy, it’s over for you. The moment VS Code says, “Hey, George…”
George Hotz (03:04:16) No, no, no, but VS Code is… No, Microsoft’s doing that to try to get me hooked on it. I’ll see through it. I’ll see through it. It’s a gold digger, man. It’s a gold digger.
Lex Fridman (03:04:26) Well, it can be an open source thing.
George Hotz (03:04:27) Well, this just gets more interesting, right. If it’s open source, then yeah, it becomes-
Lex Fridman (03:04:31) Though, Microsoft’s done a pretty good job on that.
George Hotz (03:04:33) Oh, absolutely. No, no, no. Look, I think Microsoft… Again, I wouldn’t count on it to be true forever, but I think right now, Microsoft is doing the best work in the programming world. Between GitHub, GitHub Actions, VS Code, the improvements to Python, it was Microsoft. This is-
Lex Fridman (03:04:51) Who would’ve thought, Microsoft and Mark Zuckerberg are spearheading the open source movement.
George Hotz (03:04:57) Right? Right? How things change.
Lex Fridman (03:05:01) Oh, it’s beautiful.
George Hotz (03:05:03) And by the way, that’s who I bet on to replace Google, by the way.
Lex Fridman (03:05:06) Who?
George Hotz (03:05:06) Microsoft.
Lex Fridman (03:05:07) Microsoft.
George Hotz (03:05:08) I think Satya Nadella said straight up, “I’m coming for it.”
Lex Fridman (03:05:11) Interesting. So your bet, who wins AGI? That’s [inaudible 03:05:16]-
George Hotz (03:05:15) I don’t know about AGI. I think we’re a long way away from that. But I would not be surprised, if in the next five years, Bing overtakes Google as a search engine.
Lex Fridman (03:05:24) Interesting.
George Hotz (03:05:25) Wouldn’t surprise me.
Lex Fridman (03:05:26) Interesting. I hope some startup does.
George Hotz (03:05:33) It might be some startup too. I would equally bet on some startup.
Lex Fridman (03:05:37) Yeah. I’m like 50 50. But maybe that’s naive. I believe in the power of these language models.
George Hotz (03:05:43) Satya is alive. Microsoft’s alive.
Lex Fridman (03:05:45) Yeah, it’s great. It’s great. I like all the innovation in these companies. They’re not being stale, and to the degree they’re being stale, they’re losing. So there’s a huge incentive to do a lot of exciting work and open source work, this is incredible.
George Hotz (03:06:01) Only way to win.

Meaning of life

Lex Fridman (03:06:02) You’re older, you’re wiser. What’s the meaning of life, George Hotz?
George Hotz (03:06:08) To win.
Lex Fridman (03:06:09) It’s still to win?
George Hotz (03:06:10) Of course.
Lex Fridman (03:06:12) Always?
George Hotz (03:06:13) Of course.
Lex Fridman (03:06:14) What’s winning look like for you?
George Hotz (03:06:17) I don’t know. I haven’t figured out what the game is yet, but when I do, I want to win-
Lex Fridman (03:06:19) So it’s bigger than solving self-driving? It’s bigger than democratizing and decentralizing compute?
George Hotz (03:06:29) I think the game is to stand eye to eye with God.
Lex Fridman (03:06:33) I wonder what that means for you. At the end of your life, what that would look like.
George Hotz (03:06:41) I mean, this is what… I don’t know. There’s probably some ego trip of mine. “You want to stand eye to eye with God. You’re just blasphemous, man.” Okay. I don’t know. I don’t know. I don’t know. I don’t know if it would upset God. I think he wants that. I mean, I certainly want that from my creations. I want my creations to stand eye to eye with me. So why wouldn’t God want me to stand eye to eye with him? That’s the best I can do, golden rule.
Lex Fridman (03:07:11) I’m just imagining the creator of a video game, having to look, stand eye to eye, with one of the characters.
George Hotz (03:07:22) I only watched season one of Westworld. But yeah, we got to find the maze and solve it.
Lex Fridman (03:07:27) Yeah. I wonder what that looks like. It feels like a really special time in human history, where that’s actually possible. There’s something about AI that’s… we’re playing with something weird here. Something really weird.
George Hotz (03:07:41) I wrote a blog post. I reread Genesis, and it looks like they give you some clues at the end of Genesis for finding the Garden of Eden. And I’m interested. I’m interested.
Lex Fridman (03:07:54) Well, I hope you find just that, George, you’re one of my favorite people. Thank you for doing everything you’re doing and in this case, for fighting for open source or for decentralization of AI. It’s a fight worth fighting, fight worth winning, hashtag. I love you, brother. These conversations are always great. Hope to talk to you many more times. Good luck with Tiny Corp.
George Hotz (03:08:15) Thank you. Great to be here.
Lex Fridman (03:08:17) Thanks for listening to this conversation with George Hotz. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Albert Einstein, “Everything should be made as simple as possible, but not simpler.” Thank you for listening and hope to see you next time.