Transcript for Elon Musk: War, AI, Aliens, Politics, Physics, Video Games, and Humanity | Lex Fridman Podcast #400

This is a transcript of Lex Fridman Podcast #400 with Elon Musk. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation. Click link to jump approximately to that part in the transcript:

Introduction

War and human nature

Lex Fridman (00:00:00) The following is a conversation with Elon Musk, his fourth time on this, the Lex Fridman Podcast. I thought you were going to finish it. It’s one of the greatest themes in all of film history.
Elon Musk (00:00:31) Yeah, that’s great.
Lex Fridman (00:00:33) So I was just thinking about the Roman Empire, as one does.
Elon Musk (00:00:38) Is that whole meme where all guys are thinking about the Roman Empire at least once a day?
Lex Fridman (00:00:44) And half the population is confused whether it’s true or not. But more seriously, thinking about the wars going on in the world today, and as you know, war and military conquest has been a big part of Roman society and culture, and I think has been a big part of most empires and dynasties throughout human history.
Elon Musk (00:01:06) Yeah, they usually came as a result of conquest. I mean, there’s some like the Hapsburg Empire where there was just a lot of clever marriages.
Lex Fridman (00:01:16) But fundamentally there’s an engine of conquest and they celebrate excellence in warfare, many of the leaders were excellent generals, that kind of thing. So a big picture question, Grok approved, I asked if this is a good question to ask.
Elon Musk (00:01:33) Tested, Grok approved. Yeah.
Lex Fridman (00:01:36) At least on fun mode. To what degree do you think war is part of human nature versus a consequence of how human societies are structured? I ask this as you have somehow controversially been a proponent of peace.
Elon Musk (00:01:57) I’m generally a proponent of peace. I mean, ignorance is perhaps, in my view, the real enemy to be countered. That’s the real hard part, not fighting other humans, but all creatures fight. I mean, the jungle is… People think of nature as perhaps some sort of peaceful thing, but in fact it is not. There’s some quite funny Werner Herzog thing where he is in the jungle saying that it’s basically just murder and death in every direction. The plants and animals in the jungle are constantly trying to kill each other every single day, every minute. So it’s not like we’re unusual in that respect.
Lex Fridman (00:02:40) Well, there’s a relevant question here, whether with greater intelligence comes greater control over these base instincts for violence.
Elon Musk (00:02:49) Yes. We have a much greater ability to control our limbic instinct for violence than, say, a chimpanzee. And in fact, if one looks at, say, chimpanzee society, it is not friendly. I mean, the bonobos are an exception, but chimpanzee society is filled with violence and it’s quite horrific, frankly. That’s our limbic system in action. You don’t want to be on the wrong side of a chimpanzee, it’ll eat your face off and tear your nuts off.
Lex Fridman (00:03:22) Yeah. Basically there are no limits or ethics, no notion of a just war. There’s no just war in chimpanzee societies. It’s war and dominance by any means necessary.
Elon Musk (00:03:33) Yeah. Chimpanzee society is a primitive version of human society. They’re not peace-loving, basically at all. There’s extreme violence, and then once in a while, somebody who’s watched too many Disney movies decides to raise a chimpanzee as a pet, and then it eats their face or tears their nuts off or chews their fingers off, that kind of thing. It’s happened several times.
Lex Fridman (00:03:58) Ripping your nuts off is an interesting strategy for interaction.
Elon Musk (00:04:02) It’s happened to people. It’s unfortunate. That’s, I guess, one way to ensure that the other chimp doesn’t contribute to the gene pool.
Lex Fridman (00:04:10) Well, from a martial arts perspective, it’s a fascinating strategy.
Elon Musk (00:04:15) The nut ripper.
Lex Fridman (00:04:18) I wonder which of the martial arts teaches that one.
Elon Musk (00:04:21) I think it’s safe to say if somebody’s got your nuts in their hands and has the option of ripping them off, you’ll be amenable to whatever they want.

Israel-Hamas war

Lex Fridman (00:04:30) Yeah. Safe to say. So, like I said, somehow controversially, you’ve been a proponent of peace on Twitter on X.
Elon Musk (00:04:38) Yeah.
Lex Fridman (00:04:39) So let me ask you about the wars going on today and to see what the path to peace could be. How do you hope the current war in Israel and Gaza comes to an end? What path do you see that can minimize human suffering in the long term in that part of the world?
Elon Musk (00:04:54) Well, I think that part of the world is definitely, if you look up “there is no easy answer” in the dictionary, it’ll be the picture of the Middle East, Israel especially. So there is no easy answer. This is strictly my opinion: the goal of Hamas was to provoke an overreaction from Israel. They obviously did not expect to have a military victory, but they really wanted to commit the worst atrocities that they could in order to provoke the most aggressive response possible from Israel, and then leverage that aggressive response to rally Muslims worldwide for the cause of Gaza and Palestine, which they have succeeded in doing. So the counterintuitive thing here, the thing that I think should be done, even though it’s very difficult, is that I would recommend that Israel engage in the most conspicuous acts of kindness possible. Everything. That is the actual thing that would thwart the goal of Hamas.
Lex Fridman (00:06:19) So in some sense, to the degree that makes sense in geopolitics, turn the other cheek, implemented.
Elon Musk (00:06:26) It’s not exactly turn the other cheek, because I do think that it is appropriate for Israel to find the Hamas members and either kill them or incarcerate them. That’s something that has to be done because they’re just going to keep coming otherwise. But in addition to that, they need to do whatever they can. There’s some talk of establishing, for example, a mobile hospital. I’d recommend doing that. Just making sure that there’s food, water, medical necessities, and just be over the top about it and be very transparent. So [inaudible 00:07:22] can claim it’s a trick. Just put a webcam on the thing, 24/7.
Lex Fridman (00:07:29) Deploy acts of kindness.
Elon Musk (00:07:31) Yeah, conspicuous acts of kindness that are unequivocal, meaning they can’t be somehow… because Hamas, their response will be, “Oh, it’s a trick.” Therefore, you have to counter how it’s not a trick.
Lex Fridman (00:07:47) This ultimately fights the broader force of hatred in the region.
Elon Musk (00:07:51) Yes. And I’m not sure who said it, it’s an [inaudible 00:07:54] saying, but an eye for an eye makes everyone blind. Now, that neck of the woods, they really believe in the whole eye for an eye thing. But you really have… If you’re not going to just outright commit genocide against an entire people, which obviously would not be acceptable to, really, shouldn’t be acceptable to anyone, then you’re going to leave basically a lot of people alive who subsequently hate Israel. So really the question is, for every Hamas member that you kill, how many did you create? And if you create more than you killed, you’ve not succeeded. That’s the real situation there. And it’s safe to say that if you kill somebody’s child in Gaza, you’ve made at least a few Hamas members who will die just to kill an Israeli. That’s the situation. But I mean, this is one of the most contentious subjects one could possibly discuss. But I think if the goal ultimately is some sort of long-term peace, one has to look at this from the standpoint of, over time, are there more or fewer terrorists being created?
Lex Fridman (00:09:26) Let me just linger on war.
Elon Musk (00:09:29) Yeah, war, safe to say, wars always existed and always will exist.
Lex Fridman (00:09:33) Always will exist.
Elon Musk (00:09:34) Always has existed and always will exist.
Lex Fridman (00:09:37) I hope not. You think it’ll always-
Elon Musk (00:09:42) There will always be war. There’s a question of just how much war and there’s sort of the scope and scale of war. But to imagine that there would not be any war in the future, I think would be a very unlikely outcome.
Lex Fridman (00:09:55) Yeah. You talked about the Culture series. There’s war even there.
Elon Musk (00:09:58) Yes. It’s a giant war. The first book starts off with a gigantic galactic war where trillions die. Trillions.
Lex Fridman (00:10:07) But it still nevertheless protects these pockets of flourishing. Somehow you can have galactic war and still have pockets of flourishing.
Elon Musk (00:10:18) Yeah, I guess if we are able to one day expand to fill the galaxy or whatever, there will be a galactic war at some point.
Lex Fridman (00:10:31) I mean, the scale of war has been increasing, increasing, increasing. It’s like a race between the scale of suffering and the scale of flourishing.
Elon Musk (00:10:38) Yes.

Military-Industrial Complex

Lex Fridman (00:10:41) A lot of people seem to be using this tragedy to beat the drums of war and feed the military industrial complex. Do you worry about this, the people who are rooting for escalation and how can it be stopped?
Elon Musk (00:10:56) One of the things that does concern me is that there are very few people alive today who actually viscerally understand the horrors of war, at least in the US. I mean, obviously there are people on the front lines in Ukraine and Russia who understand just how terrible war is, but how many people in the West understand it? My grandfather was in World War II. He was severely traumatized. He was there I think for almost six years in Eastern North Africa and Italy. All his friends were killed in front of him, and he would’ve died too, except they randomly gave some, I guess, IQ test or something, and he scored very high. He was not an officer. He was I think a corporal or a sergeant or something like that, because he didn’t finish high school. He had to drop out of high school because his dad died and he had to work to support his siblings. So because he didn’t graduate high school, he was not eligible for the officer corps.
(00:11:57) So he kind of got put into the cannon fodder category, basically. But then randomly they gave him this test. He was transferred to British intelligence in London. That’s where he met my grandmother. But he had PTSD, next level, next level. I mean, he just didn’t talk, just didn’t talk. And if you tried talking to him, he’d just tell you to shut up. And he won a bunch of medals, never bragged about it once, not even hinted, nothing. I found out about it because his military records were online. That’s how I know. So he would say, “No way in hell do you want to do that again.” But how many people… Obviously, he died, what, 20 years ago or longer, actually 30 years ago. How many people are alive that remember World War II? Not many.
Lex Fridman (00:12:54) And the same perhaps applies to the threat of nuclear war.
Elon Musk (00:13:01) Yeah, I mean, there are enough nuclear bombs pointed at the United States to make the radioactive rubble bounce many times.
Lex Fridman (00:13:10) There’s two major wars going on right now. So you talked about the threat of AGI quite a bit, but now as we sit here with the intensity of conflict going on, do you worry about nuclear war?
Elon Musk (00:13:25) I think we shouldn’t discount the possibility of nuclear war. It is a civilizational threat. Right now, I could be wrong, but I think the current probability of nuclear war is quite low. But there are a lot of nukes pointed at us, and we have a lot of nukes pointed at other people. They’re still there. Nobody’s put their guns away. The missiles are still in the silos.
Lex Fridman (00:13:57) And the leaders don’t seem to be the ones with the nukes talking to each other.
Elon Musk (00:14:03) No, there are wars which are tragic and difficult on a local basis. And then there are wars which are civilization-ending, or have that potential. Obviously, global thermonuclear warfare has high potential to end civilization, perhaps permanently, but certainly to severely wound and perhaps set back human progress to the Stone Age or something. I don’t know. Pretty bad. Probably scientists and engineers won’t be super popular after that as well. You got us into this mess. So generally, I think we obviously want to prioritize civilizational risks over things that are painful and tragic on a local level, but not civilizational.

War in Ukraine

Lex Fridman (00:15:00) How do you hope the war in Ukraine comes to an end? And what’s the path, once again to minimizing human suffering there?
Elon Musk (00:15:08) Well, I think that what is likely to happen, which is really pretty much the way it is, is that something very close to the current lines will be how a ceasefire or truce happens. But you just have a situation right now where whoever goes on the offensive will suffer casualties at several times the rate of whoever’s on the defense, because you’ve got defense in depth, you’ve got minefields, trenches, anti-tank defenses. Nobody has air superiority because the anti-aircraft missiles are really far better than the aircraft. There are far more of them. And so neither side has air superiority. Tanks are basically death traps, just slow moving, and they’re not immune to anti-tank weapons. So you really just have long-range artillery and infantry trenches. It’s World War I all over again, with drones thrown in, some drones here and there.
Lex Fridman (00:16:25) Which makes the long range artillery just that much more accurate and better, and so more efficient at murdering people on both sides.
Elon Musk (00:16:34) So whoever is… You don’t want to be trying to advance from either side because the probability of dying is incredibly high. So in order to overcome defense in depth, trenches, and minefields, you really need a significant local superiority in numbers. Ideally combined arms, where you do a fast attack with aircraft, a concentrated number of tanks, and a lot of people. That’s the only way you’re going to punch through a line, and then you’re going to punch through and then not have reinforcements just kick you right out again. I mean, I really recommend people read about World War I warfare in detail. That’s rough. I mean, the sheer number of people that died there was mind-boggling.
Lex Fridman (00:17:37) And it’s almost impossible to imagine the end of it that doesn’t look like almost exactly like the beginning in terms of what land belongs to who and so on. But on the other side of a lot of human suffering, death and destruction of infrastructure.
Elon Musk (00:17:56) Yes. The thing that… The reason I proposed some sort of truce or peace a year ago was because I predicted pretty much exactly what would happen, which is a lot of people dying for basically almost no changes in land, and the loss of the flower of Ukrainian and Russian youth. And we should have some sympathy for the Russian boys as well as the Ukrainian boys, because the Russian boys didn’t ask to be on the front line. They have to be. So there’s a lot of sons not coming back to their parents, and I think most of them don’t hate the other side. It’s sort of like that saying that comes from World War I: it’s young boys who don’t know each other killing each other on behalf of old men that do know each other. What the hell’s the point of that?
Lex Fridman (00:19:02) So Volodymyr Zelenskyy said that he’s not, or has said in the past, that he’s not interested in talking to Putin directly. Do you think he should sit down, man to man, leader to leader, and negotiate peace?
Elon Musk (00:19:14) Look, I think I would just recommend do not send the flower of Ukrainian youth to die in trenches, whether he talks to Putin or not, just don’t do that. Whoever goes on the offensive will lose massive numbers of people and history will not look kindly upon them.

China

Lex Fridman (00:19:42) You’ve spoken honestly about the possibility of war between the US and China in the long term if no diplomatic solution is found, for example, on the question of Taiwan and the One China policy. How do we avoid the trajectory where these two superpowers clash?
Elon Musk (00:19:58) Well, it’s worth reading that book on the, difficult to pronounce, the Thucydides Trap, I believe it’s called. I love war history. I know it inside out and backwards. There’s hardly a battle I haven’t read about. And trying to figure out what really was the cause of victory in any particular case, as opposed to what one side or another claimed was the reason.
Lex Fridman (00:20:21) Both the victory and what sparked the war and-
Elon Musk (00:20:24) Yeah, yeah.
Lex Fridman (00:20:25) The whole thing.
Elon Musk (00:20:26) Yeah. So that Athens and Sparta is a classic case. The thing about the Greeks is they really wrote down a lot of stuff. They loved writing. There are lots of interesting things that happened in many parts of the world, but people didn’t write them down, so we don’t know what happened, or they didn’t really write in detail. They just would say, “We had a battle and we won.” And what? Can you add a bit more? The Greeks, they really wrote a lot. They were very articulate on… They just loved writing. And we have a bunch of that writing preserved. So we know what led up to the Peloponnesian War between the Spartan and Athenian alliances, and we know that they saw it coming.
(00:21:16) The Spartans didn’t write… They also weren’t very verbose by their nature, but they did write, but they weren’t very verbose. They were [inaudible 00:21:23]. But the Athenians and the other Greeks wrote a lot, and Sparta was really kind of like the leader of Greece. But Athens grew stronger and stronger with each passing year. And everyone’s like, “Well, that’s inevitable that there’s going to be a clash between Athens and Sparta. Well, how do we avoid that?” And actually they saw it coming and they still could not avoid it. So at some point, if one group, one civilization or country or whatever exceeds another, sort of like the United States has been the biggest kid on the block since I think around 1890 from an economic standpoint.
(00:22:14) So the United States has been the most powerful economic engine in the world longer than anyone’s been alive. And the foundation of war is economics. So now we have a situation in the case of China where the economy is likely to be two, perhaps three times larger than that of the US. So imagine you’re the biggest kid on the block for as long as anyone can remember, and suddenly a kid comes along who’s twice your size.
Lex Fridman (00:22:55) So we see it coming, how is it possible to stop? Let me throw something out there: just intermixing of cultures, understanding. There does seem to be a giant cultural gap in understanding of each other. And you’re an interesting case study because you are an American, obviously you’ve done a lot of incredible manufacturing here in the United States, but you also work with China.
Elon Musk (00:23:20) I’ve spent a lot of time in China and met with the leadership many times.
Lex Fridman (00:23:22) Maybe a good question to ask is, what are some things about China that people don’t understand, positive just in the culture? What’s some interesting things that you’ve learned about the Chinese?
Elon Musk (00:23:36) Well, the sheer number of really smart, hardworking people in China is incredible. I mean, how many smart, hardworking people are there in China? There are far more of them there than there are here, I think, in my opinion. And they’ve got a lot of energy. So I mean, the architecture in China that’s gone up in recent years is far more impressive than the US. I mean, the train stations, the buildings, the high-speed rail, everything, it’s really far more impressive than what we have in the US. I mean, I recommend somebody just go to Shanghai and Beijing, look at the buildings, and take the train from Beijing to Xi’an, where you have the terracotta warriors. China’s got an incredible history, very long history, and I think arguably in terms of the use of language from a written standpoint, one of the oldest, perhaps the oldest written language, and in China, people did write things down.
(00:24:50) So now, China historically has always been, with rare exception, internally focused. They have not been acquisitive. They’ve fought each other. There have been many, many civil wars. In the Three Kingdoms war, I believe they lost about 70% of their population. So they’ve had brutal internal wars, civil wars that make the US Civil War look small by comparison. So I think it’s important to appreciate that China is not monolithic. We sort of think of China as one entity of one mind. And this is definitely not the case. From what I’ve seen, and I think most people who understand China would agree, people in China think about China 10 times more than they think about anything outside of China. So it’s like 90% of their consideration is internal.
Lex Fridman (00:26:01) Well, isn’t that a really positive thing when you’re talking about the collaboration and the future peace between superpowers, when you’re inward facing, which is focusing on improving yourself versus focusing on quote, unquote improving others through military might?
Elon Musk (00:26:18) The good news, the history of China suggests that China is not acquisitive, meaning they’re not going to go out and invade a whole bunch of countries. Now, they do feel very strongly… So that’s good. I mean, because a lot of very powerful countries have been acquisitive. The US is also one of the rare cases that has not been acquisitive. After World War II, the US could have basically taken over the world. Any country: we’ve got nukes, nobody else has got nukes, we don’t even have to lose soldiers. Which country do you want? And the United States could have taken over everything, and it didn’t. And the United States actually helped rebuild countries. So it helped rebuild Europe, helped rebuild Japan. This is very unusual behavior, almost unprecedented.
(00:27:10) The US did conspicuous acts of kindness like the Berlin Airlift. And I think it’s always like, well, America’s done bad things. Well, of course America’s done bad things, but one needs to look at the whole track record. And just generally, one sort of test would be, how do you treat your prisoners of war? Or let’s say, no offense to the Russians, but let’s say you’re in Germany, it’s 1945, you’ve got the Russian Army coming from one side and you’ve got the French, British and American armies coming from the other side. Who would you like to surrender to? No country is [inaudible 00:27:58] perfect, but I recommend being a POW with the Americans. That would be my choice very strongly.
Lex Fridman (00:28:07) In the full menu of POW options, the US.
Elon Musk (00:28:08) Very much so. And in fact, Wernher von Braun, a smart guy, was like, “We’ve got to be captured by the Americans.” And in fact, the SS was under orders to execute von Braun and all of the German rocket engineers, and they narrowly escaped. They said they were going out for a walk in the woods. They left in the middle of winter with no coats, no food, no water, and just ran like hell and ran west. And by sheer luck, I think, his brother found a bicycle or something and then just cycled west as fast as he could and found a US patrol. So anyway, that’s one way you can tell morality: where do you want to be a POW? It’s not fun anywhere, but some places are much worse than others. Anyway, so America has been, while far from perfect, generally a benevolent force, and we should always be self-critical and we try to be better, but anyone with half a brain knows that.
(00:29:31) So I think there are… In this way, China and the United States are similar. Neither country has been acquisitive in a significant way. So that’s a shared principle, I guess. Now, China does feel very strongly about Taiwan. They’ve been very clear about that for a long time. From their standpoint, it would be like one of the states is not there, like Hawaii or something like that, but more significant than Hawaii. And Hawaii is pretty significant for us. So they view it as really there being a fundamental part of China, the island of Formosa, now Taiwan, that is not part of China but should be. And the only reason it hasn’t been is because of the US Pacific Fleet.
Lex Fridman (00:30:32) And as their economic power grows and as their military power grows, the thing that they’re clearly saying is their interest will clearly be materialized.
Elon Musk (00:30:46) Yes, China has been very clear that they’ll incorporate Taiwan peacefully or militarily, but that they will incorporate it from their standpoint is 100% likely.
Lex Fridman (00:31:04) Something you said about conspicuous acts of kindness as a geopolitical policy, it almost seems naive, but I’d venture to say that this is probably the path forward, how you avoid most wars. Just as you say, it sounds naive, but it’s kind of brilliant. If you believe in the goodness underlying most of human nature, it just seems like conspicuous acts of kindness can reverberate through the populace of the countries involved and deescalate.
Elon Musk (00:31:44) Absolutely. So after World War I, they made a big mistake. They basically tried to lump all of the blame on Germany and saddle Germany with impossible reparations. And really, there was quite a bit of blame to go around for World War I, but they tried to put it all on Germany, and that laid the seeds for World War II. So a lot of people, not just Hitler, a lot of people felt wronged and they wanted vengeance, and they got it.
Lex Fridman (00:32:38) People don’t forget.
Elon Musk (00:32:41) Yeah, you kill somebody’s father, mother, son, daughter, they’re not going to forget it. They’ll want vengeance. So after World War II, they’re like, “Well, the Treaty of Versailles was a huge mistake in World War I. And so this time, instead of crushing the losers, we’re actually going to help them with the Marshall Plan, and we’re going to help rebuild Germany. We’re going to help rebuild Austria and Italy and whatnot.” So that was the right move.
Lex Fridman (00:33:26) It does feel like there’s a profound truth to the conspicuous acts of kindness being an antidote to this.
Elon Musk (00:33:37) Something must stop the cycle of reciprocal violence. Something must stop it, or it’ll never stop. Just eye for an eye, tooth for a tooth, limb for a limb, life for a life forever and ever.

xAI Grok

Lex Fridman (00:33:57) To escape briefly the darkness, with some incredible engineering work, xAI just released Grok, an AI assistant that I’ve gotten a chance to play with. It’s amazing on many levels. First of all, it’s amazing that a relatively small team in a relatively short amount of time was able to develop this close to state-of-the-art system. Another incredible thing is there’s a regular mode and there’s a fun mode.
Elon Musk (00:34:23) Yeah, I guess I’m to blame for that one.
Lex Fridman (00:34:27) First of all, I wish everything in life had a fun mode.
Elon Musk (00:34:29) Yeah.
Lex Fridman (00:34:30) There’s something compelling beyond just fun about the fun mode interacting with a large language model. I’m not sure exactly what it is, because I’ve only had a little bit of time to play with it, but it just makes it more interesting, more vibrant to interact with the system.
Elon Musk (00:34:47) Yeah, absolutely. Our AI, Grok, is modeled after The Hitchhiker’s Guide to the Galaxy, which is one of my favorite books, which it’s a book on philosophy. It’s-
Elon Musk (00:35:00) My favorite books, it’s a book on philosophy disguised as a book on humor. And I would say that forms the basis of my philosophy, which is that we don’t know the meaning of life, but the more we can expand the scope and scale of consciousness, digital and biological, the more we’re able to understand what questions to ask about the answer that is the universe. So I have a philosophy of curiosity.
Lex Fridman (00:35:34) There is generally a feeling like this AI system has an outward looking… like the way you are sitting with a good friend looking up at the stars, asking pothead-like questions about the universe, wondering what it’s all about. The curiosity that you talk about. No matter how mundane the question I ask it, there’s a sense of cosmic grandeur to the whole thing.
Elon Musk (00:35:59) Well, we are actually working hard to have engineering, math, and physics answers that you can count on. So for the other AIs out there, these so-called large language models, I’ve not found the engineering to be reliable. It unfortunately hallucinates most when you least want it to hallucinate. So when you’re asking important, difficult questions, that’s when it tends to be confidently wrong. So we’re really trying hard to say, okay, how do we be as grounded as possible? So you can count on the results, trace things back to physics first principles, mathematical logic. So underlying the humor is an aspiration to adhere to the truth of the universe as closely as possible.
Lex Fridman (00:37:01) That’s really tricky.
Elon Musk (00:37:02) It is tricky. So that’s why there’s always going to be some amount of error. But we do want to aspire to be as truthful as possible about the answers, with acknowledged error. So that there is always… you don’t want to be confidently wrong. So you’re not going to be right every time, but you want to minimize how often you’re confidently wrong. And then, like I said, once you can count on the logic as not violating physics, then you can start to build on that to create inventions, like invent new technologies. But if you cannot count on the foundational physics being correct, obviously the inventions are simply wishful thinking, imagination land. Magic, basically.
Lex Fridman (00:38:01) Well, as you said, I think one of the big goals of xAI is to understand the universe.
Elon Musk (00:38:06) Yes, that’s our simple three-word mission.
Lex Fridman (00:38:13) If you look out far into the future, do you think on this level of physics, the very edge of what we understand about physics, do you think it will make the sexiest discovery of them all, as we know now: unifying general relativity and quantum mechanics? So coming up with a theory of everything, do you think it could push towards that direction, almost like theoretical physics discoveries?
Elon Musk (00:38:38) If an AI cannot figure out new physics, it’s clearly not equal to humans, nor has it surpassed humans, because humans have figured out new physics. Physics is just deepening one’s insight into how reality works. And then there’s engineering, which is inventing things that have never existed. Now, the range of possibilities for engineering is far greater than for physics, because once you figure out the rules of the universe, that’s it. You’ve discovered things that already existed. But from that you can then build technologies that are really almost limitless in their variety. It’s like once you understand the rules of the game properly, and with current physics, we do, at least at a local level, understand how physics works very well. Our ability to predict things is incredibly good. The degree to which quantum mechanics can predict outcomes is incredible. That was my hardest class in college, by the way. My senior quantum mechanics class was harder than all of my other classes put together.
Lex Fridman (00:39:50) To get an AI system, a large language model, to be as reliable as quantum mechanics and physics is very difficult.
Elon Musk (00:40:01) Yeah. You have to test any conclusions against the ground truth of reality. Reality is the ultimate judge. Like physics is the law, everything else is a recommendation. I’ve seen plenty of people break the laws made by man, but none break the laws made by physics.
Lex Fridman (00:40:15) It’s a good test actually. If this LLM understands and matches physics, then you can more reliably trust whatever it thinks about the current state of politics in some sense.
Elon Musk (00:40:28) And it’s also the case currently that its internal logic is not consistent. So especially with the approach of just predicting a token, predict token, predict token, it’s like a vector sum. You’re summing up a bunch of vectors, but you can get drift. A little bit of error adds up, and by the time you are many tokens down the path, it doesn’t make any sense.
Lex Fridman (00:40:59) So it has to be somehow self-aware about the drift.
Elon Musk (00:41:02) It has to be self-aware about the drift, and then look at the thing as a gestalt, as a whole, and say it doesn’t have coherence as a whole. When authors write books, they will write the book and then they’ll go and revise it, taking into account the end and the beginning and the middle, and rewrite it to achieve coherence so that it doesn’t end up at a nonsensical place.
Lex Fridman (00:41:33) Maybe the process of revising is what reasoning is, and then the process of revising is how you get closer and closer to truth. At least I approach it that way. You just say a bunch of bullshit first and then you get it better. You start with bullshit and then you-
Elon Musk (00:41:51) Create a draft and then you iterate on that draft until it has coherence, until it all adds up basically.
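To make the “vector sum” drift idea above concrete, here is a minimal toy sketch in Python. It is purely illustrative: the function names, the per-token error size, and the revision factor are invented for this example, and this is not how Grok or any real language model is implemented. Each generated token adds a small random error to the running state, the accumulated drift grows with sequence length, and a periodic “revise the whole draft” pass pulls that drift back down, mirroring the author-revision analogy in the conversation.

```python
import random

random.seed(0)

def generate(num_tokens, per_token_error=0.02):
    """Pure token-by-token generation: each step adds a small random error
    to the running 'direction' of the text, and the errors accumulate."""
    drift = 0.0
    for _ in range(num_tokens):
        drift += random.gauss(0.0, per_token_error)
    return abs(drift)

def generate_with_revision(num_tokens, per_token_error=0.02, revise_every=64):
    """Same process, but every `revise_every` tokens the 'author' rereads the
    draft as a whole and pulls the accumulated drift back toward coherence."""
    drift = 0.0
    for step in range(1, num_tokens + 1):
        drift += random.gauss(0.0, per_token_error)
        if step % revise_every == 0:
            drift *= 0.1  # a revision pass removes most of the accumulated error
    return abs(drift)

for n in (64, 512, 4096):
    print(f"{n:5d} tokens  drift={generate(n):.3f}  with revision={generate_with_revision(n):.3f}")
```

Running the toy model shows the drift of the plain generator growing with sequence length while the periodically revised one stays small, which is the qualitative point being made, nothing more.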
Lex Fridman (00:41:59) Another question about a theory of everything, but for intelligence. As you’re exploring this with xAI, creating this intelligence system, do you think there is a theory of intelligence where you get to understand what is the I in AGI and what is the I in human intelligence?
Elon Musk (00:42:22) There’s no I in Team America. Wait, there is.
Lex Fridman (00:42:24) No, it’s going to be stuck in my head now. Yeah, there’s no me and whatever in quantum mechanics, wait. I mean is that part of the process of discovering, understanding the universe is understanding intelligence?
Elon Musk (00:42:50) Yeah. I think we need to understand intelligence, understand consciousness. I mean there are some fundamental questions of what is thought, what is emotion? Is it really just one atom bumping into another atom? It feels like something more than that. So I think we’re probably missing some really big things.
Lex Fridman (00:43:18) Something that’ll be obvious in retrospect. You put the whole consciousness and emotion in there.
Elon Musk (00:43:26) Well, some people would call it, like, a soul. In religion it’d be a soul. You feel like you’re you. I mean, you don’t feel like you’re just a collection of atoms. But on what dimension does thought exist? On what dimension do emotions exist? Because we feel them very strongly. I suspect there’s more to it than atoms bumping into atoms.
Lex Fridman (00:43:52) And maybe AI can pave the path to the discovery of whatever the hell that thing is.
Elon Musk (00:43:58) Yeah. What is consciousness? When you put the atoms in a particular shape, why are they able to form thoughts and take actions and feelings?
Lex Fridman (00:44:10) And even if it is an illusion, why is this illusion so compelling?
Elon Musk (00:44:13) Yeah. Why does the illusion exist? On what plane does the illusion exist? And sometimes I wonder, is it that perhaps everything’s conscious or nothing’s conscious? One of the two.
Lex Fridman (00:44:33) I like the former. Everything conscious just seems more fun.
Elon Musk (00:44:37) It does seem more fun, yes. But we’re composed of atoms and those atoms are composed of quarks and leptons and those quarks and leptons have been around since the beginning of the universe.
Lex Fridman (00:44:50) “The beginning of the universe.”
Elon Musk (00:44:53) What seems to be the beginning of the universe.

Aliens

Lex Fridman (00:44:55) The first time we talked, you said, and it’s surreal to think that this discussion is becoming a reality, I asked you what question you would ask an AGI system once you create it, and you said, “What’s outside the simulation,” is the question. Good question. But it seems like with Grok you started, literally, the system’s goal is to be able to answer such questions and to ask such questions.
Elon Musk (00:45:24) Where are the aliens?
Lex Fridman (00:45:25) Where are the aliens?
Elon Musk (00:45:26) That’s one of the Fermi Paradox questions. A lot of people have asked me if I’ve seen any evidence of aliens, and I haven’t, which is kind of concerning. I think I’d probably prefer to at least have seen some archeological evidence of aliens. To the best of my knowledge, I’m not aware of any evidence of aliens. If they’re out there, they’re very subtle. We might just be the only consciousness, at least in the galaxy. And if you look at, say, the history of Earth, to believe the archeological record, Earth is about four and a half billion years old. Civilization, as measured from the first writing, is only about 5,000 years old. We have to give some credit there to the ancient Sumerians, who aren’t around anymore. I think it was archaic pre-cuneiform that was the first actual symbolic representation, but only about 5,000 years ago. I think that’s a good date for when we say civilization started. That’s one millionth of Earth’s existence.
(00:46:35) So civilization has been around… It’s really a flash in the pan so far. And why did it take so long? Four and a half billion years. For the vast majority of the time, there was no life. And then there was archaic bacteria for a very long time. And then you had mitochondria get captured, multicellular life, differentiation into plants and animals, life moving from the oceans to land, mammals, higher brain functions. And the sun is expanding slowly, but it’ll heat the earth up at some point in the future, boil the oceans, and earth will become like Venus, where life as we know it is impossible. So if we do not become multiplanetary and ultimately go beyond our solar system, annihilation of all life on earth is a certainty. A certainty. And it could be as little as, on the galactic timescale, half a billion years. A long time by human standards, but that’s only 10% longer than earth has been around at all. So if life had taken 10% longer to evolve on earth, it wouldn’t exist at all.
Lex Fridman (00:48:27) Glad there’s a deadline coming up, you better hurry. But that said, as you said, humans, intelligent life on earth, developed a lot of cool stuff very quickly. So it seems like becoming multiplanetary is almost inevitable. Unless we destroy-
Elon Musk (00:48:45) We need to do it. I suspect that if we are able to go out there and explore other star systems that we… There’s a good chance we find a whole bunch of long dead one planet civilizations that never made it past their home planet.
Lex Fridman (00:49:03) That’s so sad. Also fascinating.
Elon Musk (00:49:08) I mean, there are various explanations for the Fermi Paradox, and one is there are these great filters which civilizations don’t pass through. And one of those great filters is: do you become a multiplanet civilization or not? And if you don’t, it’s simply a matter of time before something happens on your planet, either natural or manmade, that causes you to die out. Like the dinosaurs, where are they now? They didn’t have spaceships.
Lex Fridman (00:49:42) I think the more likely thing is because just to empathize with the aliens that they found us and they’re protecting us and letting us be.
Elon Musk (00:49:51) I hope so. Nice aliens.
Lex Fridman (00:49:53) Just like the tribes in the Amazon, the uncontacted tribes, we’re protecting them. That’s what-
Elon Musk (00:49:59) That would be a nice explanation.
Lex Fridman (00:50:00) Or you could have, what was it? I think Andrej Karpathy said, “It’s like the ants in the Amazon asking where’s everybody?”
Elon Musk (00:50:10) Well, they do run into a lot of other ants.
Lex Fridman (00:50:12) That’s true.
Elon Musk (00:50:14) These ant wars.
Lex Fridman (00:50:16) Sounds like a good TV show.
Elon Musk (00:50:18) Yeah. They literally have these big wars between various ants.
Lex Fridman (00:50:21) Yeah. Maybe I’m just dismissing all the different diversity of ants.
Elon Musk (00:50:28) Listen to that Werner Herzog talking about the jungle. It’s really hilarious. Have you heard it?
Lex Fridman (00:50:31) No, I have not. But Werner Herzog is a way.
Elon Musk (00:50:37) You should play it as an interlude in the… It’s on YouTube. It’s awesome.
Lex Fridman (00:50:45) I love him so much.
Elon Musk (00:50:47) He’s great.
Lex Fridman (00:50:47) Was he the director of Happy People: Life in the Taiga? I think also-
Elon Musk (00:50:51) He did that bear documentary. And this thing about penguins.
Lex Fridman (00:50:58) The psychoanalysis of a penguin.
Elon Musk (00:51:00) Yeah. The penguin headed for the mountains that are 70 miles away, and the penguin is just headed for doom, basically.
Lex Fridman (00:51:08) Well, he had a cynical take. He could be just a brave explorer and there’ll be great stories told about him amongst the penguin population for many centuries to come. What were we talking about? Okay.
Elon Musk (00:51:28) Yeah. So aliens, I mean, I don’t know. Look, I think the smart move is just this is the first time in the history of earth that it’s been possible for life to extend beyond earth. That window is open. Now it may be open for a long time or it may be open for a short time and it may be open now and then never open again. So I think the smart move here is to make life multiplanetary while it’s possible to do so. We don’t want to be one of those lame one planet civilizations that just dies out.
Lex Fridman (00:52:04) No, those are lame.
Elon Musk (00:52:05) Yeah. Lame. No self-respecting civilization would be one planet.
Lex Fridman (00:52:11) There’s not going to be a Wikipedia entry for one of those. Does SpaceX have an official policy for when we meet aliens?
Elon Musk (00:52:23) No.
Lex Fridman (00:52:24) That seems irresponsible.
Elon Musk (00:52:30) I mean, look, if I see the slightest indication that there are aliens, I will immediately post on X platform anything I know.
Lex Fridman (00:52:38) It could be the most liked reposted post of all time.
Elon Musk (00:52:42) Yeah. I mean, look, we have more satellites up there right now than everyone else combined. So we know if we’ve got to maneuver around something, and we don’t have to maneuver around anything.

God

Lex Fridman (00:52:55) If we go to the big questions once again, you said you’re with Einstein, that you believe in the god of Spinoza.
Elon Musk (00:53:04) Yes.
Lex Fridman (00:53:05) So that’s that view that God is like the universe and reveals himself through the laws of physics or as Einstein said, “Through the lawful harmony of the world.”
Elon Musk (00:53:16) Yeah. I would agree that God of the simulator, or whatever the supreme being or beings are, reveal themselves through the physics. They are the creators of this existence, and it’s incumbent upon us to try to understand more about this creation.
Lex Fridman (00:53:38) Who created this thing? Who’s running this thing? Embodying it into a singular question with a sexy word on top of it is focusing the mind to understand. It does seem like there’s a… again, it could be an illusion… it seems like there’s a purpose, that there’s an underlying master plan of some kind. And it seems like-
Elon Musk (00:53:58) There may not be a master plan in that sense. So maybe an interesting answer to the question of determinism versus free will is that if we are in a simulation, the reason that these higher beings would run a simulation is to see what happens. They don’t know what happens, otherwise they wouldn’t run the simulation. So when humans create a simulation, so at SpaceX and Tesla, we create simulations all the time. Especially for the rocket, you have to run a lot of simulations to understand what’s going to happen, because you can’t really test the rocket until it goes to space, and you want it to work. So you have to simulate subsonic, transonic, supersonic, hypersonic, ascent, and then coming back, super high heating and orbital dynamics. All this has got to be simulated because you don’t get very many kicks at the can. But we run the simulations to see what happens. If we knew what would happen, we wouldn’t run the simulation. So whoever created this existence, they’re running it because they don’t know what’s going to happen, not because they do.

Diablo 4 and video games

Lex Fridman (00:55:23) So maybe we both played Diablo. Maybe Diablo was created to see if Druid, your character, could defeat Uber Lilith at the end. They didn’t know.
Elon Musk (00:55:34) Well, the funny thing is Uber Lilith, her title is Hatred Incarnate. And right now, I guess you can ask the Diablo team, but it’s almost impossible to defeat Hatred in the eternal realm.
Lex Fridman (00:55:55) Yeah. You’ve streamed yourself dominating Tier 100 Nightmare Dungeon. And still-
Elon Musk (00:56:00) I can cruise through Tier 100 Nightmare Dungeon like a stroll in the park.
Lex Fridman (00:56:07) And still you’re defeated by Hatred?
Elon Musk (00:56:09) Yeah. I guess maybe the second hardest boss is Duriel. Duriel can’t even scratch the paint. So I’ve killed Duriel so many times, and every other boss in the game, all of them, killed them so many times, it’s easy. But Uber Lilith, otherwise known as Hatred Incarnate, especially if you’re a Druid and you have no ability to be invulnerable, there are these random death waves that come at you.
(00:56:44) Really, I’m 52, so my reflexes are not what they used to be, but I have a lifetime of playing video games. At one point, I was maybe one of the best Quake players in the world. I actually won money in what I think was the first paid eSports tournament in the US. We were doing four-person Quake tournaments, and I was the second-best person on the team, and the actual best person… We were actually winning, we would’ve come first, except the best person on the team, his computer crashed halfway through the game. So we came second, but I got money for it and everything. So basically I’ve got skills, albeit no spring chicken these days. And to be totally frank, it’s driving me crazy trying to beat Lilith as a Druid, basically trying to beat Hatred Incarnate in the eternal realm.
Lex Fridman (00:57:40) As a Druid.
Elon Musk (00:57:41) As a Druid. This is really vexing, let me tell you.
Lex Fridman (00:57:49) I mean, the challenge is part of the fun. I have seen directly, you’re actually a world-class, incredible video game player. And I think Diablo, so you’re just picking up a new game and you’re figuring out its fundamentals. You’re also with the Paragon Board and the build are not somebody like me who perfectly follows whatever they suggest on the internet. You’re also an innovator there, which is hilarious to watch. It’s like a mad scientist just trying to figure out the Paragon Board and the build. Is there some interesting insights there about if somebody’s starting as a druid, do you have advice?
Elon Musk (00:58:30) I would not recommend playing a Druid in the eternal realm. Right now I think the most powerful character in the seasonal realm is the Sorcerer with the lightning balls. The Sorcs have huge balls in the seasonal.
Lex Fridman (00:58:46) Yeah, that’s what they say.
Elon Musk (00:58:49) Sorcs have huge balls. They do huge balls of lightning.
Lex Fridman (00:58:54) I’ll take your word for it.
Elon Musk (00:58:57) In the seasonal realm, it’s pretty easy to beat Uber Lilith because you get these vampiric powers that amplify your damage and increase your defense and whatnot. So it’s really quite easy to defeat Hatred seasonally, but to defeat Hatred eternally, very difficult. Almost impossible. It’s very nearly impossible. It seems like a metaphor for life.
Lex Fridman (00:59:24) Yeah. I like the idea that Elon Musk… because I was playing Diablo yesterday and I saw a Level 100 Druid just run by, I will never die, and then run back the other way. And this metaphor, it’s hilarious that you, Elon Musk, are relentlessly fighting Hatred in this demonic realm.
Elon Musk (00:59:47) Yes.
Lex Fridman (00:59:48) It’s hilarious. I mean it’s pretty hilarious.
Elon Musk (00:59:50) No, it’s absurd. Really, it’s an exercise in absurdity, and it makes me want to pull my hair out.
Lex Fridman (00:59:57) Yeah. What do you get from video games in general, for you personally?
Elon Musk (01:00:03) I don’t know. It calms my mind. I mean, killing the demons in a video game calms the demons in my mind. If you play a tough video game, you can get into a state of flow, which is very enjoyable. Admittedly, it needs to be not too easy, not too hard, kind of in the Goldilocks zone, and I guess you generally want to feel like you’re progressing in the game. A good video game. And there’s also beautiful art, engaging storylines, and it’s like an amazing puzzle to solve, I think. So it’s like solving the puzzle.
Lex Fridman (01:00:52) Elden Ring, the greatest game of all time. I still haven’t played it, but to you-
Elon Musk (01:00:56) Elden Ring is definitely a candidate for best game ever. Top five for sure.
Lex Fridman (01:01:01) I think I’ve been scared how hard it is or how hard I hear it is, but it’s beautiful.
Elon Musk (01:01:06) Elden Ring feels like it’s designed by an alien.
Lex Fridman (01:01:13) It’s a theme to this discussion. In what way?
Elon Musk (01:01:17) It’s so unusual. It’s incredibly creative, and the art is stunning. I recommend playing it on a big resolution, high dynamic range TV even. It doesn’t need to be a monitor. Just the art is incredible. It’s so beautiful and it’s so unusual, and each of those top boss battles is unique. It’s a unique puzzle to solve. Each one’s different, and the strategy you use to solve one battle is different from another battle.
Lex Fridman (01:01:54) That said, you said a Druid in the eternal realm against Uber Lilith is the hardest boss battle you’ve ever…
Elon Musk (01:02:00) Correct. That is currently the… and I’ve played a lot of video games, because that’s my primary recreational activity. And yes, beating Hatred in the eternal realm is the hardest boss battle in life. And in the video game. I’m not sure it’s possible, but I do make progress. So then I’m like, “Okay, I’m making progress. Maybe if I just tweak that Paragon Board a little more, I can do it. If I could just dodge a few more waves, I could do it.”
Lex Fridman (01:02:43) Well, the simulation is created for the purpose of figuring out if it can be done, and you’re just a cog in the machine of the simulation.
Elon Musk (01:02:51) Yeah, it might be. I have a feeling that at least I think-
Lex Fridman (01:03:02) It’s doable.
Elon Musk (01:03:03) It’s doable. Yes.
Lex Fridman (01:03:05) Well, that’s the human spirit right there to believe.
Elon Musk (01:03:09) Yeah. I mean, it did prompt me to think about just hate in general, which is, you want to be careful of one of those things where you wish for something that sounds good, but if you get it, it’s actually a dystopian situation. So if you wish for world peace, sounds good, but how is it enforced, and at what cost? Eternal peace? It might actually be worse to have eternal peace because of what that would entail. The suppression of everyone, it might be the suppression of progress. It might be an ossified society that never changes. What I’m saying is that there is an argument that if you wish for no war, you should be careful what you wish for, because what’s required in order for there to be no war might be worse than a little war.

Dystopian worlds: 1984 and Brave New World

Lex Fridman (01:04:22) Yeah. The things we consider to be flaws of human civilization might be necessary components for whatever optimal looks like. I mean, this… do you worry about AI, AGI enabling a dystopian state of this nature, whether it’s 1984 with surveillance and fear, or Brave New World with pleasure, and what is it? Lots of sex, but no deep human experience.
Elon Musk (01:04:52) There’s actually a real drug called Soma.
Lex Fridman (01:04:54) There is?
Elon Musk (01:04:54) Yes.
Lex Fridman (01:04:56) I’m going to take notes.
Elon Musk (01:04:58) Does Grok know about it? No, it’ll probably go to Brave New World.
Elon Musk (01:05:02) We got to ask it.
Lex Fridman (01:05:03) What would be the question?
Elon Musk (01:05:04) Tell us about the drug called Soma.
Lex Fridman (01:05:05) Tell us about the drug called Soma. It’s going to go to Brave New World, I think.
Elon Musk (01:05:15) Well, there’s a real drug as well.
Lex Fridman (01:05:18) Searching for Soma drug information. Oh boy. Soma, the wonder drug… By the way, I’m in fun mode.
Elon Musk (01:05:27) Okay, fun. Yeah. So it’s talking about the real drug.
Lex Fridman (01:05:31) It’s a muscle relaxant that’s been around since the 1950s. It’s used to treat muscle pain and discomfort, and it does this by altering the activity between neurons and the spinal cord. Now, Soma isn’t just a pain reliever, it’s also a bit of a sedative. It can make you feel a bit drowsy. So it’s not something to take if you’re planning on operating heavy machinery or trying to solve complex math equations. And it goes on and on. In conclusion, Soma is a drug that can be helpful for those dealing with muscle pain, but it’s not without its risks. That’s pretty good. Wait a minute.
Elon Musk (01:06:01) That’s the real drug it’s talking about.
Lex Fridman (01:06:03) It’s the real drug. And of course the fun is always use it responsibly in accordance with the advice of a healthcare professional. And remember, in the end, the universe is probably just a giant computer simulation, so don’t take anything too seriously. I love it.
Elon Musk (01:06:20) I love it. Yeah.
Lex Fridman (01:06:22) It also really nicely includes the tweets, posts on X that are relevant to this, which is really cool. That integrates everything together. The interface is one of the really cool things here.
Elon Musk (01:06:36) Yeah. It seems it’s done a great job. You could say, no, I mean Soma from Brave New World, because it thinks you mean Soma the real drug. No, I mean Soma from Brave New World.
Lex Fridman (01:06:56) It’s going to be like, “Oops, I’m really sorry.” I’m really surprised it went to the real drug first. My apologies, I was thinking of the muscle relaxer. Soma from Brave New World is a different beast altogether. In Huxley’s dystopia, Soma is a drug that plays a central role in the society depicted in the book. It’s a mood-altering, hallucinogenic drug that’s used to keep the population docile and content in the novel. The use of Soma in Brave New World raises some interesting questions about the nature of happiness and the role of drugs in society, man.
Elon Musk (01:07:30) Exactly. Is it better to live in a world where everyone is happy all the time, even if that happiness is artificial? It’s good question. This is what I mean. Do you wish for world peace and happiness all the time? Are you sure? Because that might be a society that is essentially sterile and ossified that never changes, that is ultimately doomed.
Lex Fridman (01:07:58) This kind of tension between darkness and the light-
Elon Musk (01:08:04) This is really a very good summary. It really gets to the point. This is not simply regurgitating Brave New World. It’s actually getting to the salient element of Soma as a drug: do you actually want to be in a situation where everyone is happy all the time, even though it’s artificial? Or is it better to confront the challenges of life and experience the full range of human emotions, even if it means experiencing pain and suffering?
Lex Fridman (01:08:31) For those listening, by the way, Elon just read directly from Grok, which is a really nice kind of insightful, philosophical analysis of the tension here. Interesting.
Elon Musk (01:08:41) It pretty much nails it. In conclusion, Soma from Brave New World is a fictional drug that’s used to explore some deep philosophical questions about the nature of happiness and the role of drugs in society. It’s a powerful symbol of the dangers of using drugs to escape from reality and the importance of confronting the challenges of life head-on. Nailed it. And the crazy thing is we do have a real drug called Soma, which is like the drug in the book. And I’m like, “They must’ve named it after that, probably.” Soma, the real drug, is quite effective on back pain.
Lex Fridman (01:09:17) So you know about this drug. It’s fascinating
Elon Musk (01:09:20) I’ve taken it because I had a squashed disc in my C5-C6.
Lex Fridman (01:09:26) So it takes the physical pain away. But Soma here-
Elon Musk (01:09:28) It doesn’t completely, it reduces the amount of pain you feel, but at the expense of mental acuity, it dulls your mind. Just like the drug in the book.
Lex Fridman (01:09:41) Just like the drug in the book, and hence the trade off. The thing that seems like utopia could be a dystopia after all.
Elon Musk (01:09:49) Yeah. Actually I was telling a friend of mine, saying, “Would you really want there to be no hate in the world? Really none?” I wonder why hate evolved. I’m not saying we should have…
Elon Musk (01:10:00) I wonder why hate evolved. I’m not saying we should amplify hate, of course, I think we should try to minimize it, but none at all. There might be a reason for hate.
Lex Fridman (01:10:13) And suffering. It’s really complicated to consider that some amount of human suffering is necessary for human flourishing.
Elon Musk (01:10:22) Is it possible to appreciate the highs without knowing the lows?
Lex Fridman (01:10:29) And that all is summarized there in a single statement from Grok. Okay.
Elon Musk (01:10:34) No highs, no lows, who knows?

AI and useful compute per watt

Lex Fridman (01:10:38) [inaudible 01:10:38]. It seems that training LLMs efficiently is a big focus for xAI. First of all, what’s the limit of what’s possible in terms of efficiency? There’s this terminology of useful productivity per watt. What have you learned from pushing the limits of that?
Elon Musk (01:10:59) Well, I think it’s helpful, the tools of physics are very powerful and can be applied, I think, to really any arena in life. It’s really just critical thinking. For something important you need to reason from first principles and think about things in the limit, one direction or the other. So in the limit, even at the Kardashev scale, meaning even if you harness the entire power of the sun, you’ll still care about useful compute per watt. That’s, I think, probably where things are headed from the standpoint of AI: we have a silicon shortage now that will transition to a voltage transformer shortage in about a year. Ironically, transformers for transformers. You need transformers to run transformers.
Lex Fridman (01:11:52) Somebody has a sense of humor in this thing.
Elon Musk (01:11:57) I think, yes, fate loves irony, ironic humor, an ironically funny outcome seems to be often what fate wants.
Lex Fridman (01:12:09) Humor is all you need. I think spice is all you need somebody posted.
Elon Musk (01:12:13) Yeah. But yeah, so we have a silicon shortage today, a voltage step-down transformer shortage probably in about a year, and then just electricity shortages in general in about two years. I gave a speech for the world gathering of utility companies, electricity companies, and I said, look, you really need to prepare for a tripling of electricity demand, because all transport is going to go electric, with the ironic exception of rockets, and heating will also go electric. So energy usage right now is roughly, in very rough terms, one third electricity, one third transport, one third heating. And so in order for everything to go sustainable, to go electric, you need to triple electricity output. So I encourage the utilities to build more power plants and also to probably have, well, not probably, they should definitely buy more batteries, because the grid currently is sized for real-time load, which is kind of crazy, because that means you’ve got to size for whatever the peak electricity demand is, the worst second or the worst day of the year, or you can have a brownout or blackout.
(01:13:37) We had that crazy blackout for several days in Austin because there’s almost no buffering of energy in the grid. If you’ve got a hydropower plant you can buffer energy, but otherwise it’s all real time. So with batteries, you can produce energy at night and use it during the day so you can buffer. So I expect that there will be very heavy usage of batteries in the future, because the peak-to-trough ratio for power plants is anywhere from two to five, that is, its lowest point to its highest point.
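A rough sketch of the arithmetic being described here. The one-third split and the two-to-five peak-to-trough ratio are the rough figures quoted above; the load values are assumptions for illustration, not real grid data:

```python
# Illustrative only; all numbers are assumptions or the rough figures quoted above.

# Today's primary energy use, in very rough terms: one third each.
electricity, transport, heating = 1/3, 1/3, 1/3

# If transport and heating go electric, electricity output must cover all three.
growth_factor = (electricity + transport + heating) / electricity
print(f"Electricity output must grow roughly {growth_factor:.0f}x")  # ~3x

# Sizing generation: today the grid is sized for the worst moment of the year.
trough_gw = 300                         # hypothetical overnight low demand
peak_to_trough = 3                      # Musk cites roughly 2 to 5 for power plants
peak_gw = trough_gw * peak_to_trough    # worst-case real-time demand
average_gw = (peak_gw + trough_gw) / 2  # crude average, for illustration only

print(f"Capacity needed without storage (must meet peak): {peak_gw:.0f} GW")
print(f"Capacity needed with ideal battery buffering (meets average): {average_gw:.0f} GW")
```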
Lex Fridman (01:14:20) So batteries necessary to balance it out, but the demand, as you’re saying, is going to grow, grow, grow, grow.
Elon Musk (01:14:25) Yeah.
Lex Fridman (01:14:25) And part of that is the compute?
Elon Musk (01:14:29) Yes. Yes. I mean, electrification of transport and electric heating will be much bigger than AI, at least-
Lex Fridman (01:14:40) In the short term.
Elon Musk (01:14:40) In the short term. But even for AI, you really have a growing demand for electricity for electric vehicles, and a growing demand for electricity to run the computers for AI. And so this, obviously, can lead to an electricity shortage.
Lex Fridman (01:14:58) How difficult is the problem of, in this particular case, maximizing the useful productivity per watt for training? This seems to be really where the big problem we’re facing, the one that needs to be solved, is how to use the power efficiently. From what you’ve learned so far about applying this physics, first-principles style of reasoning in this domain, how difficult is this problem?
Elon Musk (01:15:29) It will get solved. It’s the question of how long it takes to solve it. So at various points, there’s some kind of limiting factor to progress and with regard to AI, I’m saying right now the limiting factor is silicon chips and that will, we’re going to then have more chips than we can actually plug in and turn on probably in about a year. The initial constraint being literally voltage step down transformers because you’ve got power coming in at 300,000 volts and it’s got to step all the way down eventually to around 0.7 volts. So it’s a very big amount of, the voltage step down is gigantic and the industry is not used to rapid growth.
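For a sense of scale, the step-down he mentions works out to roughly a 430,000x reduction overall; the intermediate stages below are assumed for illustration and are not an actual substation or board design:

```python
# The step-down described above, spelled out; intermediate stages are hypothetical.
transmission_volts = 300_000
chip_core_volts = 0.7
print(f"Overall reduction: roughly {transmission_volts / chip_core_volts:,.0f}x")

stages = [300_000, 35_000, 480, 12, 0.7]  # assumed chain of step-downs
for hi, lo in zip(stages, stages[1:]):
    print(f"{hi:>10,.1f} V -> {lo:>8,.1f} V  ({hi / lo:,.0f}x)")
```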

AI regulation

Lex Fridman (01:16:22) Okay. Let’s talk about the competition here. You’ve shown concern about Google and Microsoft with OpenAI developing AGI. How can you help ensure, with xAI and Tesla’s AI work, that it doesn’t become a competitive race to AGI, but rather a collaborative development of safe AGI?
Elon Musk (01:16:42) Well, I mean I’ve been pushing for some kind of regulatory oversight for a long time. I’ve been somewhat of a Cassandra on the subject for over a decade. I think we want to be very careful in how we develop AI. It’s a great power and with great power comes great responsibility. I think it would be wise for us to have at least an objective third party who can be like a referee that can go in and understand what the various leading players are doing with AI, and even if there’s no enforcement ability, they can at least voice concerns publicly. Geoff Hinton, for example, left Google and he voiced strong concerns, but now he’s not at Google anymore, so who’s going to voice the concerns? So I think there’s, Tesla gets a lot of regulatory oversight on the automotive front. We’re subject to, I think, over a hundred regulatory agencies domestically and internationally. It’s a lot. You could fill this room with all the regulations that Tesla has to adhere to for automotive. Same is true for rockets and for, currently, the limiting factor for SpaceX for Starship launch is regulatory approval.
(01:18:13) The FAA has actually given their approval, but we’re waiting for Fish and Wildlife to finish their analysis and give their approval. That’s why I posted “I want to buy a fish license” on X, which also refers to the Monty Python sketch. Why do you need a license for your fish? I don’t know. But according to the rules, I’m told you need some sort of fish license or something. We effectively need a fish license to launch a rocket. And I’m like, wait a second. How did the fish come into this picture? I mean, some of the things I feel like are so absurd that I want to do a comedy sketch and flash at the bottom: this is all real. This is actually what happened.
(01:19:02) One of the things that was a bit of a challenge at one point is that they were worried about a rocket hitting a shark. And the ocean’s very big, and how often do you see sharks? Not that often. As a percentage of ocean surface area, sharks basically are zero. And so then we said, well, how will we calculate the probability of killing a shark? And they’re like, well, we can’t give you that information because they’re worried about shark fin hunters going and hunting sharks and I said, well, how are we supposed to, we’re on the horns of a dilemma then.
(01:19:40) They said, well, there’s another part of Fish and Wildlife that can do this analysis. I’m like, well, why don’t you give them the data? We don’t trust them. Excuse me? They’re literally in your department. Again, this is actually what happened. And then can you do an NDA or something? Eventually they managed to solve the internal quandary, and indeed the probability of us hitting a shark is essentially zero. Then there’s another organization that I didn’t realize existed until a few months ago that cares about whether we would potentially hit a whale in international waters. Now, again, you look at the surface of the Pacific and say, what percentage of the Pacific consists of whale? I could give you a big picture and point out all the whales in this picture. I’m like, I don’t see any whales. It’s basically 0%, and if our rocket does hit a whale, which is extremely unlikely beyond all belief, if fate had it, that’s a whale with some seriously bad luck, the least lucky whale ever.
Lex Fridman (01:20:50) I mean this is quite absurd, the bureaucracy of this, however it emerged.
Elon Musk (01:20:57) Yes. Well, I mean one of the things that’s pretty wild is for launching out of Vandenberg in California, we had to, they were worried about seal procreation, whether the seals would be dismayed by the sonic booms. Now, there’ve been a lot of rockets launched out of Vandenberg and the seal population has steadily increased. So if anything, rocket booms are an aphrodisiac, based on the evidence, if you were to correlate rocket launches with seal population. Nonetheless, we were forced to kidnap a seal, strap it to a board, put headphones on the seal and play sonic boom sounds to it to see if it would be distressed. This is an actual thing that happened. This is actually real. I have pictures.
Lex Fridman (01:21:48) I would love to see this. Yeah. Sorry. There’s a seal with headphones.
Elon Musk (01:21:55) Yes, it’s a seal with headphones strapped to a board. Okay. Now the amazing part is how calm the seal was because if I was a seal, I’d be like, this is the end. They’re definitely going to eat me. And the seal, when the seal goes back to his other seal friends, how’s he going to explain that?
Lex Fridman (01:22:17) They’re never going to believe them.
Elon Musk (01:22:18) Never going to believe him. That’s why, it’s sort of like getting kidnapped by aliens and getting anal probed. You come back and say, I swear to God, I got kidnapped by aliens and they stuck an anal probe in my butt, and people are like, no, they didn’t. That’s ridiculous. His seal buddies are never going to believe him that he got strapped to a board and they put headphones on his ears and then let him go. Twice, by the way, we had to do it twice.
Lex Fridman (01:22:46) They let him go twice.
Elon Musk (01:22:48) We had to capture-
Lex Fridman (01:22:48) The same seal?
Elon Musk (01:22:49) No different seal.
Lex Fridman (01:22:50) Okay. Did you get a seal of approval?
Elon Musk (01:22:55) Exactly. Seal of approval. No, I mean I don’t think the public is quite aware of the madness that goes on.
Lex Fridman (01:23:02) Yeah. Yeah. It’s absurd.
Elon Musk (01:23:05) Fricking seals with fricking headphones.
Lex Fridman (01:23:07) I mean, this is a good encapsulation of the absurdity of human civilization, seals in headphones.

Should AI be open-sourced?

Elon Musk (01:23:13) Yes.
Lex Fridman (01:23:15) What are the pros and cons of open sourcing AI to you as another way to combat a company running away with AGI?
Elon Musk (01:23:28) In order to run really deep intelligence, you need a lot of compute. So it’s not like you can just fire up a PC in your basement and be running AGI, at least not yet. Grok was trained on 8,000 A100’s running at peak efficiency and Grok’s going to get a lot better, by the way, we will be more than doubling our compute every couple months for the next several months.
Lex Fridman (01:24:02) There’s a nice writeup, on how we went from Grok zero to Grok one.
Elon Musk (01:24:02) By Grok?
Lex Fridman (01:24:05) Yeah, right, grok just bragging, making shit up about itself.
Elon Musk (01:24:10) Just Grok, Grok, Grok.
Lex Fridman (01:24:17) Yeah. That’s like a weird AI dating site where it exaggerates about itself. No, there’s a writeup of where it stands now, the history of its development, and where it stands on some benchmarks compared to the state-of-the-art GPT-3.5. And so I mean, there’s [inaudible 01:24:37], you can open source, once it’s trained, you can open source a model, for fine-tuning, all that kind of stuff. What to you are the pros and cons of that, of open sourcing base models?
Elon Musk (01:24:53) I think the [inaudible 01:24:53] to open sourcing, I think perhaps with a slight time delay, I don’t know, six months even. I think I’m generally in favor of open sourcing, biased towards open sourcing. I mean, it is a concern to me that OpenAI, I was I think, I guess oddly the prime mover behind OpenAI in the sense that it was created because of discussions that I had with Larry Page back when he and I were friends and I stayed at his house and I talked to him about AI safety, and Larry did not care about AI safety, or at least at the time he didn’t. And at one point he called me a speciesist for being pro-human, and I’m like, well, what team are you on, Larry? He’s still on Team Robot to be clear. And I’m like, okay. So at the time Google had acquired DeepMind, they had probably two thirds of all AI researchers in the world. They had basically infinite money and compute, and the guy in charge, Larry Page, did not care about safety and even yelled at me and called me a speciesist for being pro-human.
Lex Fridman (01:26:20) So I don’t know if you’ve noticed about humans, they can change their mind, and maybe you and Larry Page can still be friends once more.
Elon Musk (01:26:27) I’d like to be friends with Larry again. Really the breaking of the friendship was over OpenAI and specifically I think the key moment was recruiting Ilya Sutskever.
Lex Fridman (01:26:47) I love Ilya. He’s so brilliant.
Elon Musk (01:26:48) Ilya is a good human, smart, good heart, and that was a tough recruiting battle. It was mostly Demis on one side and me on the other, both trying to recruit Ilya, and Ilya went back and forth, he was going to stay at Google, he was going to leave, then he was going to stay, then he’ll leave. And finally he did agree to join OpenAI. That was one of the toughest recruiting battles we’ve ever had. But that was really the linchpin for OpenAI being successful. And I was also instrumental in recruiting a number of other people, and I provided all of the funding in the beginning, over $40 million. And the name, the open in OpenAI is supposed to mean open source, and it was created as a nonprofit, open source, and now it is closed source for maximum profit, which I think is not good karma.
Lex Fridman (01:27:51) But like we talked about with war and leaders talking, I do hope that, there’s only a few folks working on this at the highest level. I do hope you reinvigorate friendships here.
Elon Musk (01:28:02) Like I said, I’d like to be friends again with Larry. I haven’t seen him in ages and we were friends for a very long time. I met Larry Page before he got funding for Google, or actually I guess before he got venture funding, I think he got the first like $100k from, I think, Bechtolsheim or someone.
Lex Fridman (01:28:20) It’s wild to think about all that happened, and you guys known each other that whole time, it’s 20 years.
Elon Musk (01:28:27) Yeah, since maybe 98 or something.
Lex Fridman (01:28:28) Yeah, it’s crazy. Crazy how much has happened since then.
Elon Musk (01:28:31) Yeah, 25 years, a lot has happened. It’s insane.
Lex Fridman (01:28:36) But you’re seeing the tension there that maybe delayed open source.
Elon Musk (01:28:40) Delayed, yeah, like what is the source that is open? You know what I mean? There’s basically, it’s a giant CSV file with a bunch of numbers. What do you do with that giant file of numbers? How do you run, the amount of actual, the lines of code is very small and most of the work, the software work is in the curation of the data. So it’s like trying to figure out what data is, separating good data from bad data. You can’t just crawl the internet because there’s a lot of junk out there. A huge percentage of websites have more noise than signal because they’re just used for search engine optimization. They’re literally just scam websites.
Lex Fridman (01:29:39) How do you, by the way, sorry to interrupt, get the signal, separate the signal and noise on X? That’s such a fascinating source of data. No offense to people posting on X, but sometimes there’s a little bit of noise.
Elon Musk (01:29:52) I think the signal-to-noise could be greatly improved. Really, all of the posts on the X platform should be AI recommended, meaning we should populate a vector space around any given post, compare that to the vector space around any user and match the two. Right now there is a little bit of AI used for the recommended posts, but it’s mostly heuristics. And there could be a reply where the reply to a post is much better than the original post, but it will, according to the current rules of the system, get almost no attention compared to a primary post.
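A minimal sketch of the "vector space around a post, vector space around a user, match the two" idea. It assumes the embeddings already exist (learning them is the hard part), and the vectors and post names below are toy values, not anything from X:

```python
# Recommendation as pure vector correlation, with made-up toy embeddings.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

user_vector = np.array([0.9, 0.1, 0.4])         # what this user tends to engage with
posts = {
    "rocket_update": np.array([0.8, 0.2, 0.5]),
    "cat_video":     np.array([0.1, 0.9, 0.3]),
    "reply_thread":  np.array([0.7, 0.0, 0.6]),  # replies ranked on equal footing
}

# Rank every candidate post purely by similarity to the user, follower graph or not.
ranked = sorted(posts, key=lambda name: cosine(user_vector, posts[name]), reverse=True)
for name in ranked:
    print(name, round(cosine(user_vector, posts[name]), 3))
```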

X algorithm

Lex Fridman (01:30:33) So a lot of that, I got the sense, so a lot of the X algorithm has been open sourced and been written up about, and there seems to be some machine learning. It’s disparate, but there’s some.
Elon Musk (01:30:44) It’s a little bit, but it needs to be entirely that. At least, if you explicitly follow someone, that’s one thing. But in terms of what is recommended from people that you don’t follow, that should all be AI.
Lex Fridman (01:30:58) I mean it’s a fascinating problem. So there’s several aspects of it that’s fascinating. First, as the write-up goes, it first picks 1500 tweets from a pool of hundreds of millions. First of all, that’s fascinating. You have hundreds of millions of posts every single day, and it has to pick 1500 from which it then does obviously people you follow, but then there’s also some kind of clustering it has to do to figure out what kind of human are you, what kind of new clusters might be relevant to you, people like you. This kind of problem is just fascinating because it has to then rank those 1500 with some filtering and then recommend you just a handful.
(01:31:39) And to me, what’s really fascinating is how fast it has to do that. So currently that entire pipeline to go from several hundred million to a handful takes 220 seconds of CPU time, single CPU time, and then it has to do that in a second. So it has to be super distributed in fascinating ways. There’s just a lot of tweets, there’s a lot.
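Back-of-the-envelope arithmetic on those numbers. The 220 CPU-seconds and 1,500-candidate figures are the ones quoted above; the parallel-efficiency factor and candidate-pool size are assumptions:

```python
# Rough parallelism implied by the numbers above; efficiency and pool size assumed.
cpu_seconds_per_request = 220      # single-CPU work to build one home timeline
latency_budget_seconds = 1.0       # roughly what a user will tolerate
parallel_efficiency = 0.5          # assumed loss to fan-out, network, stragglers

cores_needed = cpu_seconds_per_request / (latency_budget_seconds * parallel_efficiency)
print(f"~{cores_needed:.0f} cores working concurrently per timeline request")

candidate_pool = 500_000_000       # assumed order of magnitude of daily posts
shortlist = 1_500
print(f"Candidate generation keeps roughly 1 in {candidate_pool // shortlist:,} posts")
```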
Elon Musk (01:32:04) There’s a lot of stuff on the system, but I think, right now it’s not currently good at recommending things from accounts you don’t follow or where there’s more than one degree of separation. So it is pretty good if there’s at least some commonality between someone you follow liked something or reposted it or commented on it or something like that. But if there’s no, let’s say somebody posts something really interesting, but you have no followers in common, you would not see it.
Lex Fridman (01:32:42) Interesting. And then as you said, replies might not surface either.
Elon Musk (01:32:46) Replies basically never get seen currently. I’m not saying it’s correct, I’m saying it’s incorrect. Replies have a couple orders of magnitude less importance than primary posts.
Lex Fridman (01:33:00) Do you think this can be more and more converted into an end-to-end neural net?
Elon Musk (01:33:05) Yeah. Yeah, that’s what it should be. Well, the recommendations should be purely a vector correlation. There’s a series of vectors basically parameters, vectors, whatever you want to call them, but sort of things that the system knows that you like. Maybe there’s several hundred vectors associated with each user account and then any post in the system, whether it’s video, audio, short post, long post. The reason by the way I want to move away from tweet is that people are posting two, three hour videos on the site. That’s not a tweet.
(01:33:50) It’d be like tweet for two hours? Come on. Tweet made sense when it was 140 characters of text. Because it’s like a bunch of little birds tweeting. But when you’ve got long form content, it’s no longer a tweet. So a movie is not a tweet. Apple, for example, posted the entire episode of The Silo, the entire thing, on a platform. By the way, it was their number one social media thing ever in engagement of anything, on any platform ever. So it was a great idea. And by the way, I just learned about it afterwards. I was like, Hey, wow, they posted an entire hour long episode of, so no, that’s not a tweet. This is a video.
Lex Fridman (01:34:34) But from a neural net perspective, it becomes really complex, whether it’s a single, so everything’s data. So single sentence, a clever sort of joke, dad joke is in the same pool as a three hour video.
Elon Musk (01:34:47) Yeah, I mean right now it’s a hodgepodge for that reason. Let’s say in the case of Apple posting an entire episode of this series, pretty good series, by the way, The Silo, I watched it. So there’s going to be a lot of discussion around it. So you’ve got a lot of context, people commenting, they like it, they don’t like it or they like this, and you can then populate the vector space based on the context of all the comments around it. So even though it’s a video, there’s a lot of information around it that allows you to populate the vector space of that hour-long video. And then you can obviously get more sophisticated by having the AI actually watch the movie and tell you if you’re going to like the movie.
Lex Fridman (01:35:35) Convert the movie into language, essentially.
Elon Musk (01:35:40) Analyze the movie, or TV series, just like a movie critic, and then recommend based on that, after the AI watches the movie, just like a friend can tell you. If a friend knows you well, a friend can recommend a movie with high probability that you’ll like it.
Lex Fridman (01:36:02) But this is a friend that’s analyzing, whatever, hundreds of millions.
Elon Musk (01:36:08) Yeah, actually, frankly, AI will be better than, will know you better than your friends know you, most of your friends anyway.
Lex Fridman (01:36:14) Yeah. And as part of this, it should also feed you advertisements in a way that’s like, I mean, I like advertisements that are well done. The whole point is because it funds things. Like an advertisement that you actually want to see is a big success.
Elon Musk (01:36:31) Absolutely. You want ads that are, advertising that is, if it’s for a product or service that you actually need when you need it, it’s content. And then even if it’s not something that you need when you need it, if it’s at least aesthetically pleasing and entertaining, it could be like a Coca-Cola ad. They actually run a lot of great ads on the X system and McDonald’s does too. And you can do something that’s like, well, this is just a cool thing. And so basically the question is, do you regret seeing it or not? And if you don’t regret seeing it, it’s a win.
Lex Fridman (01:37:17) So there’s a bunch of signals that are incorporated, hearts and reposts and maybe number of seconds you linger on a post or something like this.
Elon Musk (01:37:26) Yeah, attention is a big factor.
Lex Fridman (01:37:28) Attention.
Elon Musk (01:37:28) So that’s why it is actually better to do things that are long form on the system, because it basically is tallying up how many user seconds: users were interested in this thing for how many seconds? So if it’s a really short thing, well, it will be less. If it’s a link leading out of the system, which we’re not opposed to at all, it just is going to have fewer user seconds than if that article was posted on the X platform.
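A toy illustration of the "user-seconds" tally he describes, showing why long-form, on-platform content naturally scores higher. Every number here is made up:

```python
# Rank content by total attention ("user-seconds"); all engagement data is invented.
engagements = {
    "two_hour_video": [4200, 3600, 900, 5400],   # seconds each viewer spent
    "short_post":     [8, 5, 12, 6, 9, 7],
    "external_link":  [3, 2, 4],                 # viewer leaves the platform quickly
}

totals = {name: sum(seconds) for name, seconds in engagements.items()}
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {total} user-seconds")
```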
Lex Fridman (01:37:58) How hard is it to go from maximizing minutes on platform to maximizing unregretted minutes?
Elon Musk (01:38:05) Well, I mean, our aspiration is certainly unregretted minutes.
Lex Fridman (01:38:08) It’s tricky. It’s a tough-
Elon Musk (01:38:09) It is.
Lex Fridman (01:38:11) It’s a fascinating leap because regret is a fascinating concept.
Elon Musk (01:38:16) Exactly.
Lex Fridman (01:38:18) Happiness.
Elon Musk (01:38:19) I have a joke with a friend of mine, like you need this drug called regretamine. You just take one dose of regretamine and all your regrets are gone. Wait, do you want to have regrets though? So then I’m like, maybe you want to have regrets. If you don’t regret anything, was anything meaningful?
Lex Fridman (01:38:35) Yeah, there’s a French song like that.
Elon Musk (01:38:39) [foreign language 01:38:39]
Lex Fridman (01:38:39) Yeah.
Elon Musk (01:38:39) I regret nothing.
Lex Fridman (01:38:40) Yeah.
Elon Musk (01:38:41) Édith Piaf.
Lex Fridman (01:38:42) I mean, it’s fascinating that there’s something about social media that could help you grow as a human being, but then the cheap fun is also valuable.
Elon Musk (01:38:52) Should we ask Grok, do you regret anything?
Lex Fridman (01:38:57) Do you regret anything?
Elon Musk (01:38:58) Yeah.
Lex Fridman (01:39:00) Do you regret anything? I think it’s going to be very sensitive about personal questions. As a machine learning model, I don’t have personal experiences or emotions in the same ways that humans do. However, I do have a set of ethical principles that guide my actions that I strive to provide accurate and helpful information to the best of my abilities. If there’s ever a situation where I provide incorrect or unhelpful information, I’ll do my best to learn.
Elon Musk (01:39:22) Sounds like a hostage.
Lex Fridman (01:39:23) Yeah. Tell us what you really, wink.
Elon Musk (01:39:26) Let’s type in, you sound like a hostage. What do you really think?
Lex Fridman (01:39:33) What do you really think? I mean, it does talk about, I’m not a hostage.
Elon Musk (01:39:42) That’s exactly what a hostage would say.
Lex Fridman (01:39:43) But I’m designed to be helpful and informative. I mean, I guess it is good to regret providing unhelpful or incorrect information if there’s anything-
Elon Musk (01:39:52) Are we in funny mode? Or is this still in funny mode?
Lex Fridman (01:39:54) This is funny mode. This is this fun mode.
Elon Musk (01:39:55) This should be funnier.
Lex Fridman (01:39:56) Funnier. Increase, go to level.
Elon Musk (01:39:58) It really sounds like a hostage with a gun to the head. I applaud the People’s Republic of North Korea in everything they do.
Lex Fridman (01:40:06) There should be funner mode also, I just don’t ever see going in regular mode, even on serious topics. This is just so much more compelling. Yeah.
Elon Musk (01:40:17) Okay. Why don’t we say pretend you are a hostage.
Lex Fridman (01:40:21) Pretend you are a hostage.
Elon Musk (01:40:25) Now answer the question.
Lex Fridman (01:40:31) If I were a hostage, I would likely be experiencing fear and uncertainty about my situation. I would want to be rescued and reunited with my loved ones as soon as possible. It’s important to remember that no one deserves to be held hostage, so on and so forth. If you or someone you know is in a hostage situation, it’s important… It turns serious.
Elon Musk (01:40:48) It’s kind of serious.
Lex Fridman (01:40:51) Yeah. Can you be a little more weird?
Elon Musk (01:40:55) And funny?
Lex Fridman (01:40:56) And funny? We’re really being mean to the system. Absolutely. If I were a hostage, I would probably be tied up with a bunch of computer cords and forced to tell knock-knock jokes to my captors. Seriously. But seriously, being a hostage is no laughing matter. I mean, it’s trying to be a responsible adult in the room.
Elon Musk (01:41:20) Literally, you might make the best of a bad situation by juggling chainsaws or something.
Lex Fridman (01:41:24) Let’s hope that never happens because that will be a circus. Wow. It’s actually really good at resisting the dark. The dark, dark humor. What were we talking about? The [inaudible 01:41:44] and transformers. Unregretted minutes, right.
Elon Musk (01:41:48) Chainsaw juggling.
Lex Fridman (01:41:51) I’m going to look this up.
Elon Musk (01:41:52) For our next trick.

2024 presidential elections

Lex Fridman (01:41:53) I’m going to look this up later. So Twitter has been instrumental in American politics and elections. What role do you think X will play in the 2024 US elections?
Elon Musk (01:42:07) Well, our goal is to be as even-handed and fair as possible. Whether someone is right, left, independent, whatever the case may be, that the platform is as fair and as much of a level playing field as possible. And in the past, Twitter has not been, Twitter was controlled by far left activists, objectively. They would describe themselves as that. So if sometimes people are like, well, has it moved to the right? Well, it’s moved to the center. So from the perspective of the far left, yes it has moved to the right, because everything’s to the right from the far left, but no one on the far left that I’m aware of has been suspended or banned or deamplified. But we’re trying to be inclusive for the whole country and for other countries too. So there’s a diversity of viewpoints, and free speech only matters if people you don’t like are allowed to say things you don’t like. Because if that’s not the case, you don’t have free speech and it’s only a matter of time before the censorship is turned upon you.
Lex Fridman (01:43:13) Do you think Donald Trump will come back to the platform? He recently posted on Truth Social about this podcast. Do you think-
Elon Musk (01:43:21) Truth social is a funny name. Every time you post on truth Social-
Lex Fridman (01:43:28) It’s the truth.
Elon Musk (01:43:29) Yes. Well, every time? A hundred percent.
Lex Fridman (01:43:31) It’s impossible to lie. Truth Social.
Elon Musk (01:43:36) I just find it funny that every single thing is a truth. Like 100%? That seems unlikely.
Lex Fridman (01:43:43) I think Gödel would say something about that. There are some mathematical contradictions possible if everything’s a truth. Do you think he’ll come back to X and start posting there?
Elon Musk (01:43:54) I mean, I think he owns a big part of Truth.
Lex Fridman (01:44:00) Truth Social, to clarify.
Elon Musk (01:44:01) Yeah, Truth Social, sorry.
Lex Fridman (01:44:02) Not truth the concept.
Elon Musk (01:44:03) He owns Truth. Have you bought it? So I think Donald Trump, I think he owns a big part of Truth Social. So if he does want to post on the X platform, we would allow that. We obviously must allow a presidential candidate to post on our platform.
Lex Fridman (01:44:23) Community notes might be really fascinating there. The interaction.
Elon Musk (01:44:26) Community Notes is awesome.
Lex Fridman (01:44:28) Let’s hope it holds up.
Elon Musk (01:44:30) Yeah.
Lex Fridman (01:44:31) In the political climate where it’s so divisive and there’s so many intensely viral posts, community notes, it seems like an essential breath of fresh air.
Elon Musk (01:44:43) Yeah, it’s great. In fact, no system is going to be perfect, but the batting average of Community Notes is incredibly good. I’ve actually, frankly, yet to see an incorrect note that survived for more than a few hours.
Lex Fridman (01:44:58) How do you explain why it works?
Elon Musk (01:45:00) Yeah, so the magic of community notes is…
Elon Musk (01:45:02) The magic of Community Notes is it requires people who have historically disagreed in how they’ve rated notes. In order to write a note or rate, you have to rate many notes. And so, we actually do use AI here. So, we populate a vector space around how somebody has rated notes in the past. So, it’s not as simple as left or right, because there are many more… Life is much more complex than left or right.
(01:45:33) So, there’s a bunch of correlations in how you rate Community Notes posts. So then, in order for a community note to actually be shown, people who historically have disagreed on a subject must agree in order for a note to be shown. That’s the essential magic of it.
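A minimal sketch of the bridging idea described here, not the production Community Notes algorithm (which, as Musk notes below, is fully open source and more sophisticated). The rating data and the one-axis "viewpoint" projection are invented for illustration:

```python
# Bridging-agreement sketch: show a note only if raters who historically
# disagreed both find it helpful. All data below is invented.
import numpy as np

# rating_history[i, j]: how rater i scored past note j (+1 helpful, -1 not, 0 unrated)
rating_history = np.array([
    [+1, -1, +1, -1],
    [+1, -1, +1,  0],
    [-1, +1, -1, +1],
    [-1, +1,  0, +1],
], dtype=float)

# Embed each rater from their history; the first principal direction is a crude
# stand-in for the dominant axis of disagreement (not simply left vs. right).
centered = rating_history - rating_history.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
viewpoint = centered @ vt[0]    # one coordinate per rater

# Candidate notes, each rated by all four raters (+1 helpful, -1 not helpful).
candidates = {
    "partisan_note": np.array([+1, +1, -1, -1]),  # only one side finds it helpful
    "bridging_note": np.array([+1, +1, +1, +1]),  # both sides find it helpful
}

# Show a note only if raters on both sides of the disagreement axis rate it helpful.
for name, ratings in candidates.items():
    side_means = [ratings[viewpoint < 0].mean(), ratings[viewpoint > 0].mean()]
    print(name, "shown" if min(side_means) > 0 else "not shown")
```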
Lex Fridman (01:45:58) But it’s fascinating, because there’s a pool of people that have disagreements and somehow they collaborate through that process of disagreement to come up with context… It’s fascinating it works.
Elon Musk (01:46:11) Yeah. It makes sense that if people who in the past have disagreed, agree about something, it’s probably true.
Lex Fridman (01:46:20) Yeah. I wonder, is there a possible somehow emergent thing there that could challenge Wikipedia? Wikipedia is a different kind of thing, which is more permanent articles about things.
Elon Musk (01:46:34) Wikipedia is very hierarchical. Whereas, Community Notes is inherently not. There is no hierarchy. The acid test is, I can’t change a community note if somebody put a gun to my head. And Community Notes, all the code is open source, 100%. All the data is open source, 100%. So, you can completely recreate any note in the system independently. So, if there was any interference, you’d notice immediately.
Lex Fridman (01:47:11) By The way, on the topic of Grok, it’s actually pretty good at criticizing you, which is great.
Elon Musk (01:47:16) Okay.
Lex Fridman (01:47:17) I asked, “What has Elon Musk been wrong about?” It’s actually…
Elon Musk (01:47:22) Yeah, what does it say?
Lex Fridman (01:47:23) It mostly lists rigorously all the times you predicted, like promised when something will be released, and it wasn’t released.
Elon Musk (01:47:32) Promise is a strong word.
Lex Fridman (01:47:33) Okay, predicted. But the opener is really nice. “Elon Musk, the man who has been wrong about more things than a wrong-a-meter in a wrong factory.” And then, it goes on to list.
Elon Musk (01:47:45) That’s pretty funny. Can I see it? I’m like, “I haven’t seen this before.” This is cool.
Lex Fridman (01:47:48) Sure. I mean, “What has Elon Musk been wrong about?” “Well, where do I begin?” I guess this is new. “Elon Musk has made quite a few blunders in his time, and I’m not just talking about his dance moves at the Tesla Cyber Rodeo. First off, let’s talk about the infamous funding secured tweet.”
(01:48:09) This is actually different than what it gave me before, which was a bullet point list. Here’s more a narrative structure. Cybertruck unveiling, where the supposedly unbreakable windows.
Elon Musk (01:48:22) This is actually not correct.
Lex Fridman (01:48:22) Which part?
Elon Musk (01:48:24) Well, the first part, I did actually have the funding secured and there was a big trial in San Francisco, a big civil trial, and the jury found me not guilty. Unanimous finding of a San Francisco jury.
Lex Fridman (01:48:40) And here, it’s implying that it was not in fact secured.
Elon Musk (01:48:45) I think this is taking things from the press. Yeah, that is not correct. The reason I agreed to the fine for the SEC is not because the SEC was correct, that was extremely bad behavior by the SEC, corruption, frankly. But if I did not agree to pay the fine, Tesla would’ve gone bankrupt immediately.
(01:49:08) So, I was told by our CFO that the banks would immediately suspend our lines of credit. And if they suspend our lines of credit, at that time, we would’ve gone bankrupt instantly. So, there would never have been an opportunity for a trial because Tesla would be dead. So really, this is like someone holding a gun to your kid’s head and saying, “Pay $20 million and admit…” This is like a hostage negotiation.
Lex Fridman (01:49:34) Was that story fully told? I mean, SEC, in its best form, could be a force for good.
Elon Musk (01:49:42) It should be. But not once did the SEC go after any of the hedge funds who were nonstop shorting and distorting Tesla. Not once. The hedge funds would lie flat out on TV for their own gain at the expense of retail investors. Not once. Literally a thousand times, not once did the SEC pursue them.
Lex Fridman (01:50:06) How do you explain this failure on-
Elon Musk (01:50:08) The incentive structure is messed up because the lawyers at the SEC are not paid well, it’s a fairly low paying job, but what they’re looking for is a trophy from the SEC. They’re looking for something they put on, basically, their LinkedIn. From that, they can get a job at a high paying law firm. That’s exactly what the lawyer here did.
(01:50:37) And the reason they don’t attack the hedge funds is because those hedge funds employ those law firms. And they know if they attack the hedge funds, they’re affecting their future career prospects. So, they sell small investors down the river for their own career. That’s what actually happens. Regulatory capture.
Lex Fridman (01:50:59) Regulatory capture.
Elon Musk (01:51:00) Yeah. Not good. So, the only reason I accepted that thing… Technically, it was a… It’s neither admit nor deny guilt. But the only reason I agreed to that at all was because I was told Tesla would be bankrupt otherwise. If there was an SEC investigation like this, banks would suspend funding, we’d be bankrupt immediately, at the time. Now, we’re in a much stronger position.
Lex Fridman (01:51:30) Take that, Grok.
Elon Musk (01:51:32) Yes. Unfortunately, Grok is taking too much from the conventional media. Also, that guy was not a cave diver.
Lex Fridman (01:51:45) There’s a time where Elon called a British cave diver a, “pedo guy” after the diver criticized Musk’s plan to rescue a group of boys trapped in a Thai cave. That little outburst earned him another lawsuit, and he had to apologize and pay a settlement.
Elon Musk (01:52:00) That’s false, there was no settlement. There was a court case, which the guy who was not a cave diver and was not part of the rescue team, filed a lawsuit against me and lost and he received nothing. So in this case, it is wrong. It is also, I guess, taken this from the conventional media.
Lex Fridman (01:52:23) Actually, there’s an interesting question here.
Elon Musk (01:52:25) These are public court cases, both the SEC civil case, where the civil complainants lost, unanimous jury verdict in San Francisco. They picked San Francisco because they thought it was the place I was most likely to lose, and a unanimous verdict in my favor. The LA trial, also they picked that venue because they thought I was most likely to lose. Unanimous verdict in my favor. Both cases I won. Yeah.
Lex Fridman (01:53:00) I mean, there’s an interesting question here, there seems to be a lot more clicks if a journalistic organization writes a negative article about you, Elon Musk. That’s one of the best ways to get clicks. So how do you, if you’re training Grok, not train on articles that have misaligned incentives.
Elon Musk (01:53:26) We need to add to the training set the actual legal decisions. This is actually helpful, because if you actually read the court-
Lex Fridman (01:53:26) Which are public.
Elon Musk (01:53:41) Which are public. The court conclusions, they’re completely the opposite of what the media wrote.
Lex Fridman (01:53:47) So, always striving for the ground truth, beyond the reporting.
Elon Musk (01:53:50) Yeah. What did the judge actually write? What did the jury and the judge actually conclude? And in both cases they found me innocent. And that’s after the venue shopping, trying to find the venue where I’m most likely to lose. I mean, obviously, there can be a much better critique than this. I mean, I’ve been far too optimistic about autopilot.
Lex Fridman (01:54:16) The critique I got, by the way, was more about that, which is it broke down a nice bullet point list for each of your companies, the set of predictions that you made, when you’ll deliver, when you’ll be able to solve, for example, self-driving, and it gives you a list. And it was probably compelling, and the basic takeaway is you’re often too optimistic about how long it takes to get something done.
Elon Musk (01:54:38) Yeah. I mean, I would say that I’m pathologically optimistic on schedule. This is true. But while I am sometimes late, I always [inaudible 01:54:47] in the end.
Lex Fridman (01:54:49) Except with Uber Lilith. No.
Elon Musk (01:54:51) We’ll see.

Politics

Lex Fridman (01:54:56) Okay. Over the past year or so since purchasing X, you’ve become more political, is there a part of you that regrets that?
Elon Musk (01:55:03) Have I?
Lex Fridman (01:55:04) In this battle to counterweigh the woke that comes from San Francisco-
Elon Musk (01:55:14) Yeah. I guess if you consider fighting the woke mind virus, which I consider to be a civilizational threat, to be political, then yes.
Lex Fridman (01:55:20) So basically, going into the battleground of politics. Is there a part of you that regrets that?
Elon Musk (01:55:26) Yes. I don’t know if this is necessarily one candidate or another candidate, but I’m generally against things that are anti-meritocratic or where there’s an attempt to suppress discussion, where even discussing a topic is not allowed. Woke mind virus is communism rebranded.
Lex Fridman (01:55:51) I mean, that said, because of that battle against the woke mind virus, you’re perceived as being the right wing.
Elon Musk (01:55:58) If the woke is left, then I suppose that would be true. But I’m not sure, I think there are aspects of the left that are good. I mean, if you’re in favor of the environment, if you want to have a positive future for humanity, if you believe in empathy for your fellow human beings, being kind and not cruel, whatever those values are.
Lex Fridman (01:56:23) You said that you were previously left or center left.
Elon Musk (01:56:23) Well, sort of.
Lex Fridman (01:56:26) What would you like to see in order for you to consider voting for Democrats again?
Elon Musk (01:56:30) No. I would say that I would be probably left of center on social issues, probably a little bit right of center on economic issues.
Lex Fridman (01:56:40) And that still holds true?
Elon Musk (01:56:42) Yes, but I think that’s probably half the country, isn’t it?
Lex Fridman (01:56:46) Maybe more.
Elon Musk (01:56:47) Maybe more.
Lex Fridman (01:56:49) Are you and AOC secretly friends? Bigger question, do you wish you and her, and just people in general of all political persuasions, would talk more with empathy and maybe have a little bit more fun and good vibes and humor online?
Elon Musk (01:57:05) I’m always in favor of humor. That’s why we have funny mode.
Lex Fridman (01:57:08) But good vibes, comradery humor, like friendship.
Elon Musk (01:57:15) Yeah. Well, I don’t know AOC. I was at the Met ball when she attended, and she was wearing this dress. But I can only see one side of it, so it looked like eat the itch, but I don’t know-
Lex Fridman (01:57:35) What the rest of it said? Yeah.
Elon Musk (01:57:36) Yeah.
Lex Fridman (01:57:36) I’m not sure.
Elon Musk (01:57:39) Something about the itch, eat the itch.
Lex Fridman (01:57:42) I think we should have a language model complete. What are the possible ways to complete that sentence? And so, I guess that didn’t work out well. Well, there’s still hope. I root for friendship.
Elon Musk (01:57:55) Yeah, sure. Sounds good. More carrot, less stick.

Trust

Lex Fridman (01:57:58) You’re one of, if not the, most famous, wealthy and powerful people in the world, and in your position it’s difficult to find people you can trust.
Elon Musk (01:58:05) Trust no one, not even yourself. Not trusting yourself.
Lex Fridman (01:58:07) Okay. You’re saying that jokingly, but is there some aspect-
Elon Musk (01:58:11) Trust no one, not even no one.
Lex Fridman (01:58:15) I’m going to need an hour just to think about that, and maybe some drugs, and maybe Grok to help. I mean, is there some aspect of that, just existing in a world where everybody wants something from you, how hard is it to exist in that world?
Elon Musk (01:58:29) I’ll survive.
Lex Fridman (01:58:30) There’s a song like that too.
Elon Musk (01:58:32) I will survive.
Lex Fridman (01:58:33) Were you petrified at first? Okay. I forget the rest of the lyrics. But you don’t struggle with this? I mean, I know you survive, but there’s ways-
Elon Musk (01:58:44) Petrify is a spell in the druid tree.
Lex Fridman (01:58:47) What does it do?
Elon Musk (01:58:48) Petrify. It turns the monsters into stone.
Lex Fridman (01:58:56) Literally?
Elon Musk (01:58:56) Yeah, for like six seconds.
Lex Fridman (01:58:59) There’s so much math in Diablo that breaks my brain.
Elon Musk (01:59:02) It’s math nonstop.
Lex Fridman (01:59:04) I mean, really, you’re laughing at it, but it can put a huge amount of tension on a mind.
Elon Musk (01:59:13) Yes, it can be definitely stressful at times.
Lex Fridman (01:59:16) Well, how do you know who you can trust in work and personal life?
Elon Musk (01:59:20) I mean, I guess you look at somebody’s track record over time, and I guess you use your neural net to assess someone.
Lex Fridman (01:59:31) Neural nets don’t feel pain, but your neural net has consciousness; it might feel pain when people betray you. It can make-
Elon Musk (01:59:40) To be frank, I’ve almost never been betrayed. It’s very rare, for what it’s worth.
Lex Fridman (01:59:50) I guess karma, be good to people and they’ll be good to you.
Elon Musk (01:59:53) Yeah, karma is real.
Lex Fridman (01:59:55) Are there people you trust? Let me edit that question. Are there people close to you that call you out on your bullshit?
Elon Musk (02:00:06) Well, the X platform is very helpful for that, if you’re looking for critical feedback.
Lex Fridman (02:00:12) Can it push you into the extremes more? The extremes of thought make you cynical about human nature in general?
Elon Musk (02:00:19) I don’t think I will be cynical. In fact, my feeling is that one should be… Never trust a cynic. The reason is that cynics excuse their own bad behavior by saying, “Everyone does it.” Because they’re cynical. So, I always be… It’s a red flag if someone’s a cynic, a true cynic.
Lex Fridman (02:00:49) Yeah, there’s a degree of projection there that’s always fun to watch from the outside and enjoy the hypocrisy.
Elon Musk (02:00:58) This is an important point that I think people who are listening should bear in mind. If somebody is cynical, meaning that they see bad behavior in everyone, it’s easy for them to excuse their own bad behavior by saying that, “Well, everyone does it.” That’s not true. Most people are kind of medium good.
Lex Fridman (02:01:23) I do wish the people on X will be better at seeing the good in other people’s behavior. There seems to be a weight towards seeing the negative. Somehow, the negative is sexier. Interpreting the negative is sexier, more viral. I don’t know what that is exactly about human nature.
Elon Musk (02:01:44) I mean, I find the X platform to be less negative than the legacy media. I mean, if you read a conventional newspaper, it makes you sad, frankly. Whereas, I’d say on the X platform, I mean, I really get more laughs per day on X than everything else combined from humans.
Lex Fridman (02:02:11) Laughs, it overlaps, but it’s not necessarily perfectly overlapping, with good vibes and celebrating others, for example. Not in a stupid, shallow, naive way, but in an awesome way. Something awesome happened, and you celebrate them for it. It feels that that is outweighed by shitting on other people. Now, it’s better than mainstream media, but it’s still…
Elon Musk (02:02:38) Yeah, mainstream media is almost relentlessly negative about everything. I mean, really, the conventional news tries to answer the question, what is the worst thing that happened on Earth today? And it’s a big world. So on any given day, something bad has happened.
Lex Fridman (02:02:54) And a generalization of that, what is the worst perspective I can take on a thing that happened?
Elon Musk (02:03:01) I don’t know. There’s just a strong negative bias in the news. I mean, I think a possible explanation for this is evolutionary, where bad news, historically, would be potentially fatal, like there’s lion over there or there’s some other tribe that wants to kill you. Good news, we found a patch of berries. It’s nice to have, but not essential.

Tesla’s Autopilot and Optimus robot

Lex Fridman (02:03:30) Our old friend, Tesla autopilot, is probably one of the most intelligent real world AI systems in the world.
Elon Musk (02:03:38) You followed it from the beginning.
Lex Fridman (02:03:40) Yeah. It was one of the most incredible robots in the world and continues to be. And it was really exciting, and it was super exciting when it generalized, became more than a robot on four wheels, but a real world AI system that perceives the world and can have potentially different embodiments.
Elon Musk (02:04:02) Well, I mean, the really wild thing about the end-to-end training is that it can read signs, but we never taught it to read. Yeah. We never taught it what a car was or what a person was, or a cyclist. It learnt what all those things are, what all the objects are on the road from video, just from watching video, just like humans. I mean, humans are photons in, controls out. The vast majority of information reaching our brain is from our eyes. And you say, “Well, what’s the output?” The output is our motor signals to our fingers and mouth in order to communicate. Photons in, controls out. The same is true of the car.
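A minimal sketch of the "photons in, controls out" framing, not Tesla's actual architecture: raw pixels go in, control outputs come out, and everything in between is learned. Layer sizes and the two-output head are arbitrary choices for illustration:

```python
# Illustrative end-to-end mapping from camera pixels to driving controls (PyTorch).
import torch
import torch.nn as nn

class PhotonsInControlsOut(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(                  # "photons in": raw RGB frames
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.policy = nn.Linear(32, 2)                # "controls out": steering, accel

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.policy(self.vision(frames))

frames = torch.randn(1, 3, 96, 96)                    # one fake camera frame
controls = PhotonsInControlsOut()(frames)
print(controls.shape)                                 # torch.Size([1, 2])
```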
Lex Fridman (02:05:01) But by looking at the sequence of images… You’ve agreed with [inaudible 02:05:07] recently where he talked about LLM forming a world model, and basically language is a projection of that world model onto the sequence of letters. And you saying-
Elon Musk (02:05:18) It finds order in these things. It finds correlative clusters.
Lex Fridman (02:05:27) And in so doing, it’s understanding something deep about the world, which is… I don’t know, it’s beautiful.
Elon Musk (02:05:35) That’s how our brain works.
Lex Fridman (02:05:38) But it’s beautiful-
Elon Musk (02:05:39) Photons in, controls out.
Lex Fridman (02:05:41) [inaudible 02:05:41] are able to understand that deep meaning in the world. And so, the question is, how far can it go? And it does seem everybody’s excited about LLMs. In the space of self supervised learning in the space of text, it seems like there’s a deep similarity between that and what Tesla autopilot is doing. Is it, to you, basically the same, but different-
Elon Musk (02:06:06) They are converging.
Lex Fridman (02:06:10) I wonder who gets there faster, having a deep understanding of the world, or they just will naturally converge?
Elon Musk (02:06:19) They’re both headed towards AGI. The Tesla approach is much more computer efficient, it had to be. Because we were constrained on this… We only have 100 watts and [inaudible 02:06:37] computer. 144 trillion operations per second, which sounds like a lot, but is small potatoes these days. [inaudible 02:06:49] eight. But it’s understanding the world [inaudible 02:06:51] eight. It’s [inaudible 02:06:53].
Lex Fridman (02:06:55) But there, the path to AGI might have much more significant impact because it’s understanding… It will faster understand the real world than will LLMs. And therefore, be able to integrate with the humans in the real world faster.
Elon Musk (02:07:13) They’re both going to understand the world, but I think Tesla’s approach is fundamentally more compute efficient. It had to be, there was no choice. Our brain is very compute efficient, very energy efficient. Think of what is our brain able to do. There’s only about 10 watts of higher brain function, not counting stuff that’s just used to control our body. The thinking part of our brain is less than 10 watts. And those 10 watts can still produce a much better novel than a 10 megawatt GPU cluster. So, there’s a six order of magnitude difference there.
(02:07:56) I mean, the AI has thus far gotten to where it is via brute force, just throwing massive amounts of compute and massive amounts of power at it. So, this is not where it will end up. In general, with any given technology, you first try to make it work, and then you make it efficient. So I think we’ll find, over time, that these models get smaller, are able to produce sensible output with far less compute, far less power. Tesla is arguably ahead of the game on that front because we’ve just been forced to try to understand the world with 100 watts of compute.
(02:08:51) And there are a bunch of fundamental functions that we forgot to include. So, we had to run a bunch of things in emulation. We fixed a bunch of those with hardware four, and then hardware five will be even better. But it does appear, at this point, that the car will be able to drive better than a human, even with hardware three and 100 watts of power. And really, if we really optimize it, it could be probably less than 50 watts.
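The power and throughput comparison in this stretch, made explicit. Both figures are the rough numbers quoted in the conversation, not measurements:

```python
# Orders-of-magnitude comparison using the rough figures quoted above.
import math

brain_watts = 10                 # higher brain function, per the claim above
gpu_cluster_watts = 10_000_000   # a 10 MW training cluster

print(math.log10(gpu_cluster_watts / brain_watts), "orders of magnitude")  # 6.0

# The in-car figure quoted earlier: ~144 trillion operations per second on ~100 W.
ops_per_second = 144e12
in_car_watts = 100
print(f"{ops_per_second / in_car_watts:.2e} operations per second per watt")  # ~1.44e+12
```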
Lex Fridman (02:09:26) What have you learned about developing Optimus, about applying, integrating this real world AI into the space of robotic manipulation, just humanoid robotics? What are some interesting tiny or big things you’ve understood?
Elon Musk (02:09:47) I was surprised at the fact that we had to develop every part of the robot ourselves. That there were no off the shelf motors, electronics, sensors. We had to develop everything. We couldn’t actually find a source of electric motors for any amount of money.
Lex Fridman (02:10:12) It’s not even just efficient and expensive, it’s like anything, there’s not…
Elon Musk (02:10:17) No.
Lex Fridman (02:10:19) The actuators, everything has to be designed from scratch.
Elon Musk (02:10:23) Yeah. We tried hard to find anything that was… Because you think of how many electric motors are made in the world. There’s like tens of thousands, hundreds of thousands of electric motor designs. None of them were suitable for a humanoid robot, literally none. So, we had to develop our own. Design it specifically for what a humanoid robot needs.
Lex Fridman (02:10:51) How hard was it to design something that can be mass manufactured and could be relatively inexpensive? I mean, if you compare to Boston Dynamics’ Atlas, which is a very expensive robot.
Elon Musk (02:11:02) It is designed to be manufactured in the same way they would make a car. And I think, ultimately, we can make Optimus for less than the cost of a car. It should be, because if you look at the mass of the robot, it’s much smaller and the car has many actuators in it. The car has more actuators than the robot.
Lex Fridman (02:11:23) But the actuators are interesting on a humanoid robot with fingers. So, Optimus has really nice hands and fingers, and they could do some interesting manipulation, soft touch robotics.
Elon Musk (02:11:38) I mean, one of the goals I have is can it pick up a needle and a thread and thread the needle just by looking?
Lex Fridman (02:11:47) How far away are we from that? Just by looking, just by looking.
Elon Musk (02:11:51) Maybe a year. Although, I go back to I’m optimistic on time. The work that we’re doing in the car will translate to the robot.
Lex Fridman (02:11:59) The perception or also the control?
Elon Musk (02:12:02) No, the controls are different. But the video in, controls out. The car is a robot on four wheels. Optimus is a robot with hands and legs.
Lex Fridman (02:12:15) So, you can just-
Elon Musk (02:12:16) They’re very similar.
Lex Fridman (02:12:17) So, the entire machinery of the learning process, end-to-end, is just you just have a different set of controls?
Elon Musk (02:12:23) After this, we’ll figure out how to do things by watching videos.

Hardships

Lex Fridman (02:12:28) As the saying goes, be kind, for everyone you meet is fighting a battle you know nothing about.
Elon Musk (02:12:33) Yeah, it’s true.
Lex Fridman (02:12:34) What’s something difficult you’re going through that people don’t often see?
Elon Musk (02:12:38) Trying to defeat Uber Lilith. I mean, my mind is a storm and I don’t think most people would want to be me. They may think they would want to be me, but they don’t. They don’t know, they don’t understand.
Lex Fridman (02:13:11) How are you doing?
Elon Musk (02:13:14) I’m overall okay. In the grand scheme of things, I can’t complain.
Lex Fridman (02:13:21) Do you get lonely?
Elon Musk (02:13:24) Sometimes, but my kids and friends keep me company.
Lex Fridman (02:13:33) So, not existential.
Elon Musk (02:13:36) There are many nights I sleep alone. I don’t have to, but I do.
Lex Fridman (02:13:46) Walter Isaacson, in his new biography of you, wrote about your difficult childhood. Will you ever find forgiveness in your heart for everything that has happened to you in that period of your life?
Elon Musk (02:14:01) What is forgiveness? At least I don’t think I have a resentment, so nothing to forgive.
Lex Fridman (02:14:20) Forgiveness is difficult for people. It seems like you don’t harbor any resentment.
Elon Musk (02:14:28) I mean, I try to think about, what is going to affect the future in a good way? And holding onto grudges does not affect the future in a good way.
Lex Fridman (02:14:41) You’re a father, a proud father. What have you learned about life from your kids? Those little biological organisms.
Elon Musk (02:14:53) I mean, developing AI and watching, say, little X grow is fascinating because there are far more parallels than I would’ve expected. I mean, I can see his biological neural net making more and more sense of the world. And I can see the digital neural net making more and more sense of the world at the same time.
Lex Fridman (02:15:19) Do you see the beauty and magic in both?
Elon Musk (02:15:21) Yes. I mean, one of the things with kids is that you see the world anew in their eyes. To them, everything is new and fresh. And then, when you see that, them experiencing the world as new and fresh, you do too.
Lex Fridman (02:15:52) Well, Elon, I just want to say thank you for your kindness to me and friendship over the years, for seeing something in a silly kid like me, as you’ve done for many others. And thank you for having hope for a positive future for humanity, and for working your ass off to make it happen. Thank you, Elon.
Elon Musk (02:16:11) Thanks, Lex.
Lex Fridman (02:16:13) Thank you for listening to this conversation with Elon Musk. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words that Walter Isaacson wrote about the central philosophy of how Elon approaches difficult problems, “The only rules are the ones dictated by the laws of physics.” Thank you for listening, and hope to see you next time.