
Transcript for Javier Milei: President of Argentina – Freedom, Economics, and Corruption | Lex Fridman Podcast #453

This is a transcript of Lex Fridman Podcast #453 with Javier Milei.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Javier Milei
(00:00:00)
So what is the difference between a madman and a genius? Success.
Lex Fridman
(00:00:10)
The following is a conversation with Javier Milei, the president of Argentina. He is a libertarian, anarcho-capitalist, and economist, who campaigned with a chainsaw that symbolized his promise to slash the corrupt bureaucracy of the state. He stepped into the presidency one year ago, with a country on the brink of hyperinflation, deep in debt, and suffering from mass unemployment and poverty. He took this crisis head on, transforming one of Latin America’s largest economies through pure free market principles. In just a few months in office, he already achieved Argentina’s first fiscal surplus in 16 years, and not only avoided hyperinflation but brought inflation down to its lowest level in three years.

(00:01:02)
We discuss all of this in detail, both the successes and the challenges. His depth of knowledge of economic principles, metrics and data was truly impressive and refreshing to hear from a world leader. But even bigger than the economic transformation of Argentina, Javier represents the universal fight against government corruption and the fight for freedom, economic freedom, political freedom, and freedom of speech. He has many critics, many of whom are part of the corrupt establishment he’s seeking to dismantle, but many are simply Argentinian citizens, scared of the pain his radical policies may bring, at least in the short term. But whether one disagrees with his methods or not, no one can deny that his presidency marks one of the most ambitious attempts at economic transformation in modern history, and that Javier Milei is truly a force of nature, combining the rigor of an economist with the passion of a revolutionary in the fight for freedom of a nation he loves. Argentina is one of my favorite countries, so I sincerely hope he succeeds.

(00:02:13)
This interview was conducted with the President speaking Spanish and me speaking English with an interpreter simultaneously translating. We make the episode available overdubbed and subtitled in both English and Spanish, thanks to our great friends at ElevenLabs. If you’re watching on YouTube, you can switch between English and Spanish by clicking the gear icon, selecting audio track, and then choosing the language. Same with the captions. If you’re watching on X, I’ll post both Spanish and English versions separately. If you’re watching on Spotify or listening elsewhere, I’ll probably only post the English version. This is the first time for me doing something like this in a foreign language. It was challenging, but illuminating. I hope to talk to many world leaders for two to three hours in this way, including Volodymyr Zelenskyy, Vladimir Putin, Narendra Modi, and Xi Jinping. I want to explore who they are, how they think, and how they hope to help their country and humanity flourish. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Javier Milei.

Economic freedom

Lex Fridman
(00:03:27)
When did you first understand the value of freedom, especially economic freedom?
Javier Milei
(00:03:33)
Well, actually, I came to understand the ideas of freedom as an economic growth specialist back in the years of 2013 to 2014. I could see that per capita GDP statistics over the last 2,000 years of the Christian era essentially looked like a hockey stick, indicating that per capita GDP remained almost constant until around 1800, after which it accelerated sharply. In the same context of that phenomenal increase in productivity and per capita GDP, the population had multiplied sevenfold over the preceding 200 years.

(00:04:20)
So basically, in economics, that means you get increasing returns, and the presence of increasing returns implies the existence of monopolies and concentrated structures, and according to traditional neoclassical economic theory, the presence of monopolies and concentrated structures is not a good thing. But at the same time, one could see that living standards had increased tremendously, that middle-income people ended up living far better than emperors did in the Roman era, and that the population had gone from having 95% of people in extreme poverty to less than 10%. And in that context, the question was: how could it be that something that had lifted so many people out of poverty, that had improved the human condition so much, could be something bad for economic theory? Meaning, something was not right.

(00:05:20)
So in that context, I remember that one of the people who worked on my team suggested I read an article by Murray Newton Rothbard called Monopoly and Competition. I remember reading it like it was yesterday, and after reading it carefully, I said, “Everything I’ve taught about market structure in the last 20 years in courses on microeconomics is wrong.” This caused a very strong internal commotion in me. So I called this person who used to work with me, and they recommended a place to buy Austrian School of Economics books, and I remember I bought at least 20 or 30 books, which I went to pick up one Saturday afternoon. And when I visited the bookstore, I was fascinated by all the stuff they had there.

(00:06:18)
So I went back the next day, and I started calculating how much money I needed for my dog’s food. That’s my four-legged child. And how much I needed to spend on taxi fare and food. And then, with what I had left, I spent all of it on more books. And then I started to read very intensively, and I remember, for example, the experience of reading Human Action by Mises, a book that I didn’t know about. I remember that on the following weekend, I started to read this book right from the first page, and I didn’t stop until I finished it, and that was a true revolution in my head. And then I had the chance to read Austrian authors like Rothbard, Mises, Hayek, Hoppe and Jesus Huerta de Soto, and others like Juan Ramon Rallo, Philipp Bagus and Walter Block, for example.

(00:07:27)
That was very inspirational, and at one point I got the opportunity to read the works of Alberto Benegas Lynch [foreign language 00:07:38], and I also had the pleasure and honor to meet him. Today we are actually friends. So that paved the way for me to approach the ideas of freedom. Another book that had a very significant influence and impact on me was Principles of Economics by Menger. It was truly eye-opening, and the same goes for reading Eugen von Böhm-Bawerk; these were things that really challenged all of my former thinking. I had only a vague and poor idea of the Austrian School. The only thing I had read about the Austrian School until then had been Time and Money, a very good book by Garrison. But now that I understand a little bit more about Austrian economics, I know that my grasp was rather poor. This doesn’t mean that the book isn’t good, but there was a whole lot more to read that ended up being truly fascinating.

Anarcho-capitalism

Lex Fridman
(00:08:52)
So from that, what is your economic philosophy now, today? And maybe you can talk about the evolution. You’ve described yourself as an anarcho-capitalist, a market anarchist, a libertarian. That’s the ideal, and then maybe in practice and reality, you’ve said that you’re more of a minarchist. So lay it all out. What’s your economic philosophy today?
Javier Milei
(00:09:19)
Strictly speaking, I am an anarcho-capitalist. I despise the state. I despise violence. Let us take the definition of liberalism. I usually use the definition of liberalism given by Alberto Benegas Lynch, Jr. [foreign language 00:09:37], which is very much in line with the definition of John Locke. He said that liberalism is the unrestricted respect for the life project of others, based on the principle of non-aggression and in defense of the right to life, liberty, and property. So I frame all of the discussions within those terms. And the fact is that when you arrive at that notion, I would dare say that you become an anarcho-capitalist de facto. And that idea represents my ideal world. I mean, that is the ideal world.

(00:10:23)
Now, real life poses a whole lot of constraints, some of which you can lift and others you can’t. So in real life, I am a minarchist. I advocate for minimizing the size of the state. I try to remove as many regulations as possible. In fact, that is what I used to say during my campaign, and, let’s say, that is what I’m now carrying out. We have just carried out the largest structural reform in Argentine history. It is a structural reform that is eight times larger than Menem’s, which had been the largest structural reform in history. And we did that with 15% of the representatives and 10% of the senators. Furthermore, we have a deregulation ministry where basically every day we eliminate between one and five regulations. On the other hand, we have 3,200 additional structural reforms pending, to the point that the day we finish all these reforms, we will be the freest country on the planet, with all the consequences that has in terms of well-being. Think about this: when Ireland started market reforms just over 40 years ago, it was the poorest country in Europe. Today, its GDP per capita is 50% higher than that of the United States. So I have a current situation, and what I am constantly pursuing, whether through my academic work or my outreach articles and books, is that every day we get closer, that every day we gain more freedom, because there are some very interesting things here. First, I would like to quote Milton Friedman. There is an interview with Milton Friedman where they ask him about liberals, and he says that there are three types of liberals. There are the classical liberals, where, for example, Adam Smith or Milton Friedman himself could fit. Some say that Hayek could fit into that category; for me, Hayek is a minarchist.

(00:12:38)
Then you have the minarchists, where you could clearly place Mises and Hayek, and, in philosophical terms, Nozick and basically Ayn Rand. And at one point, Milton Friedman, thinking of his own son, says, “But if you look closely, there are some who are anarchists.” From my point of view, the person who has been the greatest inspiration in my life is essentially Murray Newton Rothbard. So there are two dimensions: one is where I want to go, and the other is where I stand. So the most important thing is to try each day to advance further toward that ideal of anarcho-capitalism. In that sense, sometimes we face strong and harsh criticism regarding that ideal vision. I think that’s the nirvana fallacy. If you compare yourself against paradise, everything is horrible and miserable, but you don’t live in paradise. You live on earth. Basically, what you need to understand is the given constraints of the situation. Let’s suppose that you don’t like rectangular tables; you prefer circular tables. Now, the reality is, I have only a few hours until I go and catch my flight, and the table is rectangular. You would like a circular table, a round one, but there isn’t one. What you have is a rectangular table. So either we do the interview here or we just can’t do it. So what do you do? You adapt to the current conditions. This is what there is now. Then you have some restrictions that you can change and others that you cannot. The idea is to modify all the ones that can be changed in the short term, and start working on those that can be modified in the medium or long term. For example, if you really like round tables, perhaps the next interview we may do at a round table. We’re going to try and solve it, but today it’s something that we couldn’t possibly solve. So that’s basically the idea, right?

(00:15:09)
Let’s say it’s about understanding that some restrictions you can change, others you can’t, and there are institutional restrictions too. There are many anarcho-capitalists who are dedicated to criticizing, and incredibly, they do so with more violence toward liberals, and many of them actually criticize me, which truly makes no sense, because it is precisely the nirvana fallacy. But the reality is that… Look, in Argentina, for example, the most popular sport is soccer. When you go to watch an Argentina match, it is beautiful. The stands are full, and they’re all painted with sky blue and white colors. There is a lot of joy. People sing songs that are very fun, that are very distinctive. It’s very much part of Argentine folklore, so to speak. But that beautiful show is external. That is to say, it does not determine the outcome. You place the ball in the middle of the field, and no matter how much people shout, the ball doesn’t move. The one who moves the ball and scores the goals is Messi.

(00:16:31)
So what do I mean? If you don’t get involved and don’t get into the game, you don’t accomplish anything. And what I know is that there are many liberals, libertarians and anarcho-capitalists who are really useless, because all they do is criticize those of us who want to lead the world toward the ideas of freedom. And what they don’t realize is that power is a zero-sum game, and if we don’t have it, then the left will have it. Therefore, if you level your harshest criticism at those in your own ranks, you probably end up being subservient to socialism. And also, for instance, you have cases of strong hypocrisy. I have seen cases of agorists, the anarcho-capitalists who criticize Rothbard because he said that you have to get into politics, otherwise the socialists will advance. And it’s interesting, because I have seen some of them criticizing and proposing agorism, and I remember one of them, one day the police showed up and, honestly, he was peeing himself.

(00:17:57)
So it’s very easy to criticize, propose, and suggest, but if he was truly such an agorist, he should have been willing to endure going to jail. However, when it was time to face the consequences of the idea he was promoting, he froze, wet his pants and ended up accepting all the restrictions, because clearly it was better to be out of jail than in jail. But in doing so, he sold out his ideas. So it seems to me that not taking into account the restrictions of the situation only serves to be functional to socialism, because all it does is strike against one’s own.

Presidency and reforms

Lex Fridman
(00:18:45)
So you became president 11 months ago. Can you, again, describe some of the actions you took? For example, you cut the number of government ministries in half, carried out layoffs, removed price controls. It’ll be interesting to lay out the first steps and what’s next.
Javier Milei
(00:19:04)
If you allow me, I will first give you a description of the situation we inherited, and based on that, I will tell you each of the things we did. When we first took office, basically what we found was that in the first week of December, inflation was rising at a rate of 1% per day, which means 3,700% annually. In the first half of December, it had accelerated to 7,500% annually. When you look at wholesale inflation in December of last year, it was 54%, which if annualized would equate to an inflation rate of 17,000% per year. In addition, Argentina had not been growing for the previous 10 years, with a drop in GDP per capita of approximately 15%. And the reality was that nearly 50% of the population was living in poverty.
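The annualized figures quoted above follow from compounding the short-period rates; a rough sketch of the arithmetic (the exact day and month counts used for compounding are an assumption):

```python
# Compound a constant per-period inflation rate into an annual rate.
# Rough check of the figures quoted in the transcript:
# 1% per day and 54% per month.

def annualize(rate_per_period: float, periods_per_year: int) -> float:
    """Compounded annual rate from a constant per-period rate."""
    return (1 + rate_per_period) ** periods_per_year - 1

daily = annualize(0.01, 365)    # ~36.8, i.e. roughly 3,700% per year
monthly = annualize(0.54, 12)   # ~177, i.e. roughly 17,700% per year

print(f"1%/day -> {daily:.0%} annually")
print(f"54%/month -> {monthly:.0%} annually")
```

Compounding, not multiplying by the number of periods, is what turns a 54% monthly print into a five-digit annual figure.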

(00:20:16)
Now, later I will get deeper into that discussion. The reality is that we had a fiscal deficit which amounted to 15% of GDP: five points were in the Treasury, and 10 points were in the Central Bank, which was endogenous monetary issuance. We also had interest-bearing liabilities at the Central Bank equivalent to four monetary bases maturing in one day, meaning the amount of money could have quintupled in one day. We had peso-denominated maturities amounting to the equivalent of $90 billion. The Central Bank had negative net foreign currency reserves, minus $12 billion. We had commercial debts at the Central Bank equivalent to $50 billion. There were company dividends held back amounting to $10 billion. Therefore, if we had instantly opened everything up… You see, I say we are liberal libertarians. We are not liberal fools. That is what some anarchist liberals suggested, that we basically open everything up on the first day.

(00:21:40)
So in that context, of course, if we had done that, we would have encountered hyperinflation. That would have pushed the number of poor people to around 95%, and probably, by December, the Peronist party would have organized supermarket lootings and done all sorts of things, and we would probably have been ousted. And by the first part of the year, the Peronists would have been back in office. So to us, it was crucial to end the fiscal deficit.

(00:22:19)
One of the things we had promised during the campaign was to reduce the number of ministries, and indeed we cut the number of ministries by more than half: we went down to nine ministries, and today we have eight. We have also laid off a large number of civil employees. Today, I can say that we have already dismissed about 50,000 of them, and we practically don’t renew any contracts unless the positions are absolutely necessary. At the same time, we have stopped public works and we have eliminated discretionary transfers to the provinces. We have also diluted public sector wages. And we have eliminated economic subsidies by restoring utility rates to the right levels. In this context, we achieved fiscal balance as far as the Treasury is concerned. This is very important because in the last 123 years, Argentina had a deficit for 113 of them, and in the 10 years it did not have a deficit, that was only because it was not paying its debt, so those surpluses were absolutely false. And they told us it would be impossible to do this.

(00:23:47)
We had planned to do so within a year, and they said it wasn’t possible to adjust by more than one percentage point, and we achieved fiscal balance in the month of January. That is the first month of the administration. At the same time, we also cut social plans linked to intermediation. This is very important because we knew we were going to make a very tough adjustment, we knew that this was going to have a cost in social terms, and we knew that we had to offer support during the first and second quarters in office. One of the things we did was to eliminate what are known as poverty managers, that is, intermediaries. Basically, people have a card through which they receive assistance, but they had to provide a counter-service in return, and that counter-service was verified by a group called the piqueteros.

(00:24:54)
So in that context, when they went to sign for the counter-service, the intermediaries took away half of the money. By removing that payoff, they stopped being extorted and having their money stolen, and with the same amount of money, they received double the resources. And of course, we also provided an additional boost. So let’s say that this is related to the five adjustment points in the Treasury. Now, what happened is that as we began to achieve fiscal balance and no longer needed to issue money to finance ourselves, and as we also met interest payments and some capital repayments, the debt market began to be recreated. So we were able to take debt out of the Central Bank and transfer it to the Treasury, where it should have always been, and that meant an adjustment of approximately 10% of GDP. Everyone said this would be impossible and couldn’t be fixed.

(00:25:58)
Essentially, what we did was implement a fiscal adjustment at the Central Bank amounting to 10% of GDP. So if you ask me, it is clear that we have made not only the biggest fiscal adjustment in the history of humanity, a fiscal adjustment of 15 points of GDP, but also that most of that went back to the people as less seigniorage, as a lower inflation rate. It’s true that we temporarily raised the country tax, but we lowered it in September, and now in December we’re going to eliminate it. Today, for example, we also announced that in December we are eliminating import taxes. In fact, in that regard, we returned 13 and a half points of GDP to the people, because the real tax burden is the size of the state. So while back in December we were discussing hyperinflation, today we are discussing 30-year loans.

(00:27:03)
In other words, all those resources that the national government used to take are now back in the private sector. And that is what has allowed it to be very dynamic. And this has two very strong impacts. The first one is that if you look at wholesale inflation, it went down from 54% to 2%: it was divided by 27. So we had inflation running at a rate of 17,000% annually, and it’s now close to about 28% a year. But it’s not only that. If you consider consumer inflation, the latest consumer inflation rate was 2.7%. Now, due to a matter related to the Central Bank’s balance sheet and also due to the debt stocks, we still have controls in place, and we are eliminating restrictions day by day. The interesting thing is that we have a 2% monthly devaluation standard, and there is international inflation, of course, which means that you then have to subtract two and a half points from the inflation observed by the consumer.

(00:28:20)
This indicates that inflation in Argentina, the true inflation, not the induced one but the actual monetary inflation, is 0.2% per month. At 0.2% per month, this equates to 2.4% annually. What I’m saying is, the original discussion was about whether inflation could reach 17,000%. Now we are bringing inflation down to levels of 2.5% annually, and that is amazing. And we achieved this despite a number of factors. The first is that we did not come from a prior hyperinflation, which would have simplified the process of implementing a stabilization program: typically, when hyperinflation occurs, monetary assets are diluted, leading to a natural restoration of money demand. And besides, we did not resort to any expropriation. For example, before the Convertibility plan, which was the most successful program in Argentina’s history, Argentina experienced two instances of hyperinflation. During Alfonsin’s administration, inflation reached 5,000%, and under Menem it was 1,200%.
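The subtraction described above can be sketched as follows; the 0.5% monthly international-inflation figure is an assumption, inferred from the "two and a half points" subtracted in the transcript:

```python
# "True monetary inflation" arithmetic as described in the transcript:
# observed consumer inflation, minus the 2% monthly crawling-peg
# devaluation, minus an assumed ~0.5% monthly international inflation.

observed_monthly = 0.027   # latest consumer inflation print, 2.7%/month
crawl = 0.020              # 2% monthly devaluation standard
intl = 0.005               # assumed international inflation component

residual_monthly = observed_monthly - crawl - intl   # -> 0.2% per month
annual = (1 + residual_monthly) ** 12 - 1            # -> ~2.4% per year

print(f"residual: {residual_monthly:.1%}/month, {annual:.1%}/year")
```

Compounding the 0.2% monthly residual gives the roughly 2.4% annual figure quoted above.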

(00:29:34)
Additionally, there was the BONEX plan, under which debt was exchanged on a compulsory basis. What we did instead was clean up the Central Bank’s balance sheet: we cleared off a loss of $45 billion, all voluntarily. And the most amazing thing is that we did it in just six months.
Javier Milei
(00:30:00)
And at the same time, we have not controlled prices, nor have we fixed the exchange rate. And this is very important. All previous stabilization programs, in an effort to show quick results, used to do this. What they would do is, before announcing the plan, they would adjust the utility rates, and once the rates were adjusted, they would launch the plan. But in our case, we couldn’t afford that luxury, so we had to implement it on the go. Moreover, when we arrived, utility rates covered only about 10% of costs, whereas today they cover 80%, so you get the picture. Just imagine the adjustment we are making. And in that sense, what we have achieved is also incredible, because if you work with the inflation we have in our country today, considering the exchange rate situation, the figures are even better than during the convertibility program, which was the most successful economic program in Argentina’s history.

(00:31:09)
And in fact, there is an article called Passing the Buck, by Gerardo della Paolera, Bózzoli, and Irigoin, that argues that Menem’s first government was the best government in history, crediting in large part the success of the stabilization achieved by the convertibility program. So if you take a closer look, when you account for all these factors, our disinflation process is actually much more genuine. And not only that, it’s also much deeper. We restored freedoms to Argentinians while simultaneously implementing a structural reform eight times larger, and we accomplished this with only 15% of the representatives, 10% of the senators, and within the first six months of government. In other words, our deregulation agenda continues daily, and we still have 3,200 structural reforms pending. This will ultimately make Argentina the freest country in the world.

(00:32:18)
Moreover, to have a sense of magnitude: with the reforms that we have already made, with executive order 70/2023 and with the Bases law, we have actually jumped 90 places in terms of economic freedom. What this means is that today, Argentina has institutions similar to those of Germany, France, and Italy, and we obviously want this to continue. And let’s say we are going, no doubt, to surpass the levels of economic freedom that Ireland reached at its best moment. And not only that, we’re going to exceed the levels of economic freedom of Australia, New Zealand, and Switzerland. We are undoubtedly going to be the freest country in the world.

(00:32:59)
And this means that thanks to what we’ve done today, we are on a path that allows us to multiply our per capita GDP by 2.5 times when you apply the relevant correction. And this, of course, is something very interesting, because it implies a huge increase in well-being. Furthermore, today the Argentinian economy is already recovering strongly and amazingly. Analysts’ projections were suggesting that next year we would grow between 5% and 6%; today, JP Morgan has revised those projections upwards. And besides, when we normalized the price situation, the true poverty rate came out: it was 57% in January. Today it is at 46%, meaning we lowered poverty by 11 percentage points. I mean, it truly seems like a miracle. And not only that, not a single job was lost in the process.

(00:34:04)
When it comes to this whole inflation reduction process, people said that our economy and economic activity would collapse. But actually, when you look at the de-seasonalized data, you see that in August there was a recovery that took us back to December levels. That means that within the year, we made the largest fiscal adjustment in the history of humanity, and we will end up with less inflation, fewer poor people, better real wages, and, additionally, a GDP higher than what we started with.

(00:34:39)
And if you look at it in dollars, I can assure you that the numbers are phenomenal, because basically today the dollar is below the levels we had when we took office. So the reality is that, through all of this, when you look at my popularity levels and the government’s approval levels, today they are above those of the moment we assumed office, even though the moment of maximum popularity is usually when you take office. This means that, far from resting on our laurels, we’re going for more reforms. We’re going to deepen the reforms. And I tell you, we won’t stop until Argentina is the freest country in the world.

(00:35:26)
Furthermore, a recent work by an Argentinian economist named Juan Pablo Nicolini, who works at the Federal Reserve, was presented at the Central Bank’s monetary meetings. And it’s interesting because he shows that, on the basis of what we have done in fiscal matters alone, in the span of 10 years we can double GDP per capita, meaning that Argentina could grow at rates of 7% annually, which is a great deal, and that has strong consequences in terms of improving quality of life, reducing poverty, and reducing indigence. Therefore, if during the worst moment our image didn’t suffer and we stayed strong in our ideas, now that everything is working much better, why should we change?
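The doubling claim above is ordinary compound-growth arithmetic (the familiar rule of 70: 70/7 is about 10 years); a quick check:

```python
# ~7% annual growth roughly doubles GDP per capita in a decade.

growth = 0.07
years = 10
multiple = (1 + growth) ** years   # ~1.97, i.e. roughly double

print(f"{years} years at {growth:.0%} -> x{multiple:.2f}")
```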

(00:36:22)
On the contrary, we are ready to double down, to redouble our efforts, because we’ve done things that no one else has done. I’ll give you an example. There’s something that seems trivial: what’s called the single paper ballot. Argentina used to vote with huge ballots, which were, above all, very costly. And that reform was never carried out because it always harmed the ruling party. So everyone talked about going to the single paper ballot, but no one did it when they were in power. They didn’t want to implement it because they preferred to commit fraud, or use some kind of trickery, rather than apply a rule that makes elections more competitive. Well, what’s interesting is that we sent that law and it was approved.

(00:37:18)
What’s more, now we are doing away with the open, simultaneous and mandatory primaries, because that was another mechanism through which politics was stealing. We are eliminating the financing of political parties. If you look, we have reduced the fiscal pressure on Argentinians by 15 points. We are restoring freedoms with a deep set of structural and regulatory reforms that I think any sensible liberal can appreciate. We are already delivering a wonderful government. In fact, it’s the best government in the history of Argentina. If the best had been Menem’s, we have already surpassed it.

Poverty

Lex Fridman
(00:38:05)
Maybe you can explain to me the metrics of poverty and unemployment. As you said, real unemployment went down and real poverty went down. But even that aside, what have been the most painful impacts of these radical reforms, and how much of that pain is required in the short term to have a big positive impact in the long term?
Javier Milei
(00:38:31)
Let’s take it step by step, all right? The fact is, we started to do things right, therefore we did not create poverty. The poverty was inherited. The point is that what we did was reveal it.

(00:38:50)
I’ll try to explain it with an example that I think clarifies what’s happening in Argentina. Argentina was an economy that had total price controls. It had a fiscal deficit which was financed through money printing. Just to give you an idea, in the last year, Argentina financed 13 points of gross domestic product with money printing. In other words, a real disaster. That situation artificially propped up demand and put pressure on prices. The issue is that the controlled prices are, additionally, the ones that enter the price index with which inflation was measured. I’m not saying they were lying about it; it was distorted.

(00:39:43)
And since Argentina measures poverty and indigence by income line, what happens? That distorted the true levels of poverty, of course. But that’s not the only effect. The real poverty levels were higher, quite a bit higher, than those shown by the previous government, which reported them at 41%, and also did so on a six-monthly basis. So if poverty is on a growing trend, they are actually leaving you a bomb and you don’t see it, because the indicator was measured with a lag. But not only that. Imagine you are alone in the middle of an island and they give you $1 million. What can you do with that? You cannot do anything, because you cannot buy anything. It is the same as if someone tells you that a pair of glasses costs $10, but when you want to buy one, it’s not available.

(00:40:53)
Actually, there’s a joke told by an Argentinian professor named Juan Carlos de Pablo, who says that a man goes to a bazaar and asks about a vase. He says, “Well, I want that vase. How much would you charge me?” The shopkeeper says, “$5,000.”

(00:41:10)
“Oh, okay, $5,000. But why $5,000 if across the street it’s 1,000?” He says, “Well, go buy it across the street for 1,000.”

(00:41:19)
“Ah, there’s none for 1,000.”

(00:41:21)
“Well then, here, when there’s none, it’ll also cost 1,000.” In other words, what matters is the price at which goods are actually available. So what happens? When you were faced with that situation, the supermarket shelves were empty. What was the point of having a price at which you couldn’t buy anything? Those were the prices you had, and the shelves were empty. So the statistics showed that you were much better off, but in reality you couldn’t buy anything. You couldn’t make it happen.

(00:41:48)
So if you left the situation as it was, people were going to starve because they couldn’t buy anything. Yes, they had a certain amount of money that could supposedly buy certain goods, but those goods were not available. What is the only thing you can do to save people? Make prices transparent and allow products to reappear. Well, when you make prices transparent, you also make transparent the cost of the basic food basket and the total basic basket, meaning the indigence line and the poverty line respectively. And when you do that, clearly you will see a jump in poverty. That brought poverty up to 57%.
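
The income-line methodology he describes can be sketched in a few lines: a household counts as poor if its income falls below the cost of the basic basket, so freeing controlled prices raises the line and "reveals" poverty without anyone's income changing. All numbers below are invented for illustration; they are not official Argentine statistics.

```python
# Toy sketch of an income-line poverty measure. A household is "poor" if its
# income falls below the cost of the total basic basket (the poverty line).
# Every figure here is hypothetical, chosen only to illustrate the mechanism.

def poverty_rate(incomes, basket_cost):
    """Share of households whose income is below the poverty line."""
    poor = sum(1 for income in incomes if income < basket_cost)
    return poor / len(incomes)

# Illustrative household incomes (pesos per month, invented).
incomes = [90_000, 120_000, 150_000, 200_000, 260_000,
           320_000, 400_000, 500_000, 700_000, 1_000_000]

# Under price controls, the basket is priced at the controlled (often
# unavailable) prices, so the line is artificially low.
controlled_basket = 180_000
# Once prices are made transparent, the same basket costs more, so the same
# income distribution shows a higher measured poverty rate.
market_basket = 300_000

print(poverty_rate(incomes, controlled_basket))  # 0.3
print(poverty_rate(incomes, market_basket))      # 0.5
```

The jump from 30% to 50% here is purely a repricing of the line, not a change in anyone's income, which is the sense in which freeing prices "reveals" rather than creates poverty.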

(00:42:28)
Now, Argentina found its activity floor in the month of April. From that moment, Argentina began a cyclical recovery. Nominal wages have been growing every month above inflation, so real wages are rising. In fact, we are already at levels similar to those we had in November. The same goes for pensions.

(00:42:53)
Moreover, there is a rebound in activity due to the recovery of the inventory cycle, which is also contributing to more and better-paid jobs. In fact, this is so strong and evident that the wages growing the most are in the informal sector. This means that poverty and extreme poverty are decreasing much faster than we imagined. But not only that: by eliminating inflation, you remove the inflationary tax, and the real burden was the fiscal deficit, which was 15 points of the GDP.

(00:43:27)
Okay, we temporarily raised the country tax; now we have lowered it, so we returned that to the Argentinians. We gave back 15 points of the GDP. Not only that, but when you eliminate inflation, you remove the distortion of relative prices, so the allocation of resources is much better. Also, with the strong fiscal adjustment we made, we have reduced the country risk from 3,000 basis points to 770. Today, Fitch raised Argentina’s rating to CCC. So what do I mean? That translates into lower country risk and lower interest rates, and that generates an increase in investment and also an increase in consumption.

(00:44:13)
In other words, the Argentinian economy is currently in an absolutely flourishing moment. And how is that sustained in the long term? With structural reforms, which we implement daily, deregulating the economy and introducing new laws that free Argentinians from the many oppressive measures that have burdened them over the past 100 years.

Corruption

Lex Fridman
(00:44:37)
You’ve spoken about the caste, the corrupt political establishment. So there’s a lot of powerful people and groups that are against your ideas. What does it take to fight when so much power is against you?
Javier Milei
(00:44:54)
Look, we have fought against corruption like never before in Argentina. In fact, when we took office, for example, there were about 900 roadblocks per year. That is, people who made a habit of blocking the streets. They prevented free movement. And besides, they were given social plans and a lot of money. If you remember, when I started by explaining the cuts, one of the things I said was that we removed the middlemen of poverty. In other words, the managers of poverty, those who lived by stealing from the poor. Well, that is a huge source of corruption.

(00:45:35)
In fact, when we did that, two days later, one of the most renowned and influential Piqueteros called for a demonstration. He claimed that 50,000 people would attend, though he was actually expecting 100,000, so he could showcase it as a success. And alongside the decision made by the Ministry of Human Capital to cut their funding, the anti-blockade protocol was also enacted, under which those who blocked the streets wouldn’t receive welfare benefits.

(00:46:13)
And those who broke the law would go to jail. All of that. And we were also announcing this on public transportation channels. Well, for that march, they expected to have 100,000 people there, and it turned out to be 3,000. And from that point on, they didn’t block the streets anymore.

(00:46:38)
We evidently put an end to that corruption. One of the things that also generated a lot of corruption was public works. Another thing that led to significant acts of corruption was the discretionary transfers to provinces. In general, these transfers were made to the provinces with accounting as obscure as possible, so that, with the national government in collusion with the governors, the money ended up being used for other things. We have already done many things about that too.

(00:47:16)
Furthermore, the Ministry of Human Capital is always filing complaints in court, not in the media, in court, over acts of corruption like never before in Argentine history. Not only that, but also in terms of convicting corruption. For example, two days ago, Cristina Fernández de Kirchner got a sentence for corruption. And the next day, that is, yesterday, we took away her privileged pensions.

(00:47:52)
At the same time, for example, we have discovered that Kirchnerism used disability pensions for acts of corruption. There is, for example, a city that has more disability pensions than people. That gives you an idea of the things being done in Argentina.

(00:48:12)
And also in Argentina, we have restored freedom to the judiciary. We do not pressure the judiciary. And this is so true that during my government, not only was Cristina Fernández de Kirchner convicted, but the two terrorist attacks carried out by Iran were also condemned. So if there is a government that is truly fighting against corruption, it is us. Not only that: each deregulation is a privilege that we take away from a politician, a prebendary company, or a power group. That is also very powerful. No one in Argentina has ever fought against corruption the way we have. In fact, I will move on to something that is deeply corrupt and one of my great battles: the corruption of the media and social media. That is to say, I removed official government advertising. That’s why you will see that even though we generate wonderful news every week, in large quantity, the media speak terribly of us. In other words, they demand to have a monopoly on the microphone. That is, they feel entitled to insult, hurt, and offend, they don’t want anyone to bother them, and they expect me not to even respond. That’s why a large part of journalism in Argentina hates the X network, and that’s why the liberal libertarians love the X network, because on it we can all say what we want.

(00:49:49)
However, these supposed journalists who defend freedom of expression actually want to censor the ideas they don’t like. And of course, because they are leftists, because they are wokes, because they can’t stand the competition. Because if they had to fight face-to-face, hand to hand, on a level playing field when it comes to ideas, they would lose, because they have been a failure in the economic, social, and cultural spheres. And we must not forget that those murderers called socialists killed 150 million people. So they clearly cannot fight on equal terms. Therefore, they demand that social networks have censorship and that the truth not be told to them. Because when you tell a socialist the truth, they cry, claiming it’s hate speech. No, it’s not hate speech. It’s that you are useless people who have ruined the planet. They have made the planet much worse.

(00:50:50)
And fortunately today, thanks to social media, especially due to the enormous and brave work of Elon Musk and the role of Twitter, today X, information can flow, which makes it possible to expose politicians and also to expose the media. And that’s why journalists in Argentina are so violent. Why? Because before, a journalist could, for instance, go to a person, throw a folder at them and say, “If you don’t give me X amount of money, I am going to publish all of this and tarnish your reputation.” And I know for a fact of a case of a journalist who carried out this extortion twice against a businessman. That businessman told him he wasn’t going to pay, and evidently the journalist followed through. Obviously they went to court, there was a trial, and that journalist lost both times. But that process is very slow, and in the meantime, they smeared him.

(00:51:57)
So since the justice system takes a long time, what is the problem? The problem is that in the meantime, your reputation got dragged through the mud. So why can’t journalists do all this anymore? Well, that’s why they dislike X. They dislike social media. They dislike the new form of communication, because it took away their monopoly over the microphone. And by taking away the monopoly over the microphone, it removed the economic benefits of extortion.

(00:52:24)
So clearly, that’s another battle I’m fighting. You read a newspaper in Argentina, and 85% of what you read is a lie. That is to say, the fundamental characteristic of most journalists, not all, but the vast majority of journalists in Argentina with some honorable exceptions, is that they are liars, slanderers, and defamers. And if the monopoly they demand, the one they want to restore, were still in place, I have no doubt that they would demand money in exchange for silence, because that’s what they are. They are extortionists, they are thieves, they are corrupt. And then of course, obviously, when you take away a privilege from a sector, they get upset. Well, welcome to freedom.

Freedom

Lex Fridman
(00:53:14)
So you’re not only fighting for economic freedom, you’re fighting for freedom of speech?
Javier Milei
(00:53:19)
Exactly. I fight for freedom in all aspects of life. That is to say, one of the things that seems most interesting to me is that when the Berlin Wall fell, it’s true that officially it fell in the year 1989. But the reality is that the wall, or socialism, fell in the year 1961, when they had to build the wall. I mean, they built it because people were leaving communist Germany for capitalist Germany. They realized that those on the western side were much better off. And of course, it was built to prevent people from leaving. What a wonderful system, right? They had to trap people. They couldn’t let them go. I mean, these are such wonderful ideas that they had to apply them at gunpoint.

(00:54:13)
Well, it’s no coincidence that they killed 150 million human beings. So what happened then? The official fall of the wall in the year 1989 made it clear that socialism had failed. In that context, the socialists moved the discussion of class struggle out of economics and took it to other areas. So, for example, socialism of the 21st century, or cultural Marxism, or post-Marxism, whatever definition you want, takes class struggle to different aspects of life.

(00:54:58)
For example, one of the aspects of life where you have this is in gender ideology. I mean, it’s incredible, because the first ones to defend equality before the law were the liberals. The first to defend women’s rights were the liberals. Jeremy Bentham in the year 1750 was the first to demand equality before the law for women. I mean, the cause of equality before the law for women and equality of rights, the first ones who advocated for this were the liberals, did you know? However, what does the left do? They go on to radicalize it, and then it moves to what is called female chauvinism.

(00:55:43)
Female chauvinism is, let’s say, the fight against males. And how do they do it? They do it by assigning rights. But when you assign a right, someone has to pay for it, and that has consequences. And in general, the consequences are that the results are worse than what you had before. I mean, in any state intervention, the subsequent result is often worse than what you originally had. So that’s one thing. And the other side of this is the environmental agenda, which sets man against nature, involving all aspects of environmentalism and everything related to climate change.

(00:56:29)
In other words, they can’t stand any serious discussion. Therefore, all environmental policies are nothing more than an excuse to collect taxes so that a group of parasitic bureaucrats can live at the expense of others and finance sinister ideas, where the most sinister idea of all is that there is no room for everyone on planet Earth. That is an idea that failed with Malthus at the beginning of the 19th century, a murderous idea that was also applied by the Egyptians against the Jews, and this is famously recorded in the Book of Shemot, or Exodus.

(00:57:07)
Or, for example, another thing is Black Lives Matter, that is, Black people against white people, or indigenous people against the established communities, or everything related to LGBT agendas. Definitely, these are some of the ways in which socialism extended the class struggle into other aspects of society, creating divisions and fostering deceit with the sole purpose of absorbing taxes.

(00:57:41)
I mean, what was the Ministry of Women in Argentina doing? Did it manage to reduce a single femicide? No, none at all. The number of femicides exploded just the same. In fact, the most feminist president in Argentine history, Mr. Alberto Fernández, used to beat his wife. That is such a strange feminist. So within the ranks of feminists, let’s say, you will essentially find the largest number of rapists and women beaters. And it’s quite interesting what they do. Their hypocrisy is truly striking.

(00:58:20)
It’s not just about that, though. I mean, the battle is on three fronts. You have the economic front, which is free enterprise capitalism. Then we have the political level; currently, the system that the world has designed is a republican liberal democracy with checks and balances. And then there is the level of the cultural battle. Notice that socialism has been very successful in the cultural battle. It has been very successful politically because it was able to translate that cultural battle into winning many elections. But why is it falling apart? Because it produces misery, and because the economic system is a disaster, so people eventually realize that it is making things worse for them.

(00:59:17)
Liberals and libertarians are very good when it comes to economics, yes, and those good economic results can lead to solid political processes. But what happened? The liberals neglected the cultural battle. Much of the blame was placed on Fukuyama when he said, “This is the end of history.” No, it was not the end of history, because the following year, in 1990, the socialists gathered at the São Paulo Forum and, based on the ideas of Gramsci, designed a strategy to infiltrate the media, culture, and education, which ended up changing the entire discourse. And they established that what they said was politically correct and that any-
Javier Milei
(01:00:00)
… was politically correct and that any idea outside of it was to be considered reactionary and had to be censored or even persecuted. And they claimed to be the ones defending freedom, even though they were the ones persecuting people.

(01:00:16)
It’s the same with journalists who get upset with Twitter. They say they defend freedom but can’t stand it when those who think differently speak. Is that freedom? Yes, for them, but not for those who think differently. That’s not freedom. That’s fascism. So what do we say? We must fight on the economic front. And I believe we are implementing an extremely successful economic program that is being recognized worldwide. In fact, the other night, the president-elect, Donald Trump, gave recognition for the achievements we are having in Argentina and the speed at which we have done it.

(01:00:54)
At the same time, you have to fight the political battle because, well, soccer matches are not won by shouting from the stands, they are won by playing on the field. But that alone is not enough, because you also need to convey to society the values of capitalism, the free market, what liberalism is, the value of freedom, right? And when you succeed in that, then we will indeed be able to advance steadily. If you don’t fight the cultural battle, what happened in Chile will happen to you. They had economic success, sustained over time, but at some point it collapsed. Why did it collapse? Because they hadn’t fought the cultural battle.

(01:01:42)
Then socialism, little by little, took control of institutions in education and the media. They took over the media and culture, and on that basis they attacked and broke up the system. And then they found themselves with increasing doses of socialism, and the only thing socialism generates is poverty. Therefore, what you must keep in mind is that you have to fight the battles on all fronts. And if you don’t keep that in mind, I can tell you, you are headed towards collapse.
Lex Fridman
(01:02:17)
Like you said, in this fight against corruption, you are challenging some very powerful people, a powerful establishment. Are you ever afraid for your life? Potential assassinations?
Javier Milei
(01:02:33)
No. Tell me, what good is it to live life in slavery? Look, there is a song by a Spanish singer called Nino Bravo. Just to be clear, he has already left this earth, so we can say he has passed on to the beyond. The song is called Libre, and it tells the story of Peter Fechter, an 18-year-old boy who, when the separation was made, I mean when the construction of the Berlin Wall began, his family ended up on the western side and he accidentally ended up on the eastern side. And for a whole year, he planned his escape to the western side. And when he tried to escape, he was murdered.

(01:03:36)
So really, what is the point of life if it’s not in freedom, right? What is the point of living without fighting for your values? If I am willing to give my life for my values, then what is the point of living without freedom? Look, can I tell you something interesting that happened to me here in the United States? Back in the year 1998, I came to the United States to take a series of courses to improve my English, which I never use in formal settings because, as president, as you can imagine, if I make a mistake, I can create a serious situation. Fortunately, I have an interpreter who is a superstar, and if I make a mistake even in Spanish, he corrects me in the other language’s version.

(01:04:34)
And so back then, in that year, I went to San Francisco and I visited Alcatraz. You are young, but the visit was an audio tour: you got a Walkman and you would choose the different tracks and listen to the story. The most interesting thing is that the Alcatraz story ended in the recreation yard, where the basketball court, exercise area, and all recreational facilities were located. Anyone would have thought that this was the best part of Alcatraz. And yet, what they said in the guide was that that was the hardest part for the inmates. Why? Because that recreation area is built facing the San Francisco Bay. So the inmates could see how San Francisco continued to build up, evolve, and develop every day while they were locked up in there. They couldn’t take part in that. They were confined in that prison. And that made them fully aware of the value of freedom.

(01:05:51)
So, in my experience, for me, the fight for freedom is relentless, okay? My greatest hero in all of human history is Moses. The feat of Moses is like one person alone, with his brother Aaron, confronting the combined forces of the United States, China, and Russia together. And it was Moses who said to Ramesses, “Let my people go.” Well, Ramesses resisted and the forces of heaven ran him over. But what I mean is, I don’t see any other possible way to live other than with freedom. And I will always fight for full freedom, and I will be at the forefront of this cause. It’s a cause I’m going to die with my boots on for. I’m not going to make do with living any other way than with freedom. I will fight everything. I’m going to fight as much as it takes. At least that’s the way I feel. So, what good is it to be alive if you’re confined? What good is it to be alive if you’re not free? It’s no good. What good was it for Peter Fechter to be alive in communist Germany? Well, at least he had a moment of happiness while he tried to escape.

Elon Musk

Lex Fridman
(01:07:26)
Another guy who fights for freedom, freedom of speech in this case, is your new friend, Elon Musk. What do you admire and what have you learned from your interactions with Elon?
Javier Milei
(01:07:39)
I have a huge admiration for Elon Musk. He is an absolutely unconventional person. He’s a great fighter for the ideas of freedom. What he has done on Twitter, now known as X, and how he is helping the world nowadays to wake up once and for all and become aware of the socialist virus, the woke virus, that in itself makes him a hero in the history of humanity. But it’s not just that.

(01:08:25)
One of the things that happened to me is that when I first went to talk to him, I thought I was going to meet a successful businessman and have the typical conversation you have with a successful businessman who understands business, some of whose businesses are slightly more exotic, but that’s the kind of talk you would expect. And business people are truly admirable, right? Because they are true benefactors of society, but they’re usually very focused on their own business. And one of the things that really shocked me when I met Elon Musk: we had scheduled a meeting for no more than 50 minutes, and that first time we were in the meeting for a little over 45 minutes because he was about to miss his flight. Obviously, if someone as important as him doesn’t fly as planned, it has to be rescheduled and he loses a lot of hours. Imagine, every minute is very valuable.

(01:09:42)
And one of the things that happened was that basically he brought up the topic of demography and we started discussing demographics and growth. I never imagined that I would end up discussing demographics and growth with him. And another very fun thing was that something funny he said to me was that since we shared our vision regarding demographic issues and the need to populate the planet, he asked me, “Now, what about you? When are you going to move in that direction?” I said, “Oh, look, I have five children.” And he said, “Well, the four-legged ones don’t count.”

(01:10:27)
That was the first meeting I had with Elon Musk. The second meeting was when, here at the universities, we started seeing anti-Semitic demonstrations where Palestinian flags were displayed and Jews were harassed and persecuted. At that second meeting, he showed himself to be deeply involved with that and brought up the issue of the cultural battle. So he is not quite conventional, even in the political field.

(01:11:14)
During our last talk, which lasted about two and a half hours, one of the things we talked about was freedom and what was at stake for the United States in this election. So he is, honestly, a person well above average, of unconventional intelligence, and also very charming. Again, I have a great admiration for him and I interact very closely with him. He’s very interested in what our Ministry of Deregulation is doing, which seeks to remove regulations. And he works with another person who is also interested in the chainsaw approach, so I’m very pleased, because they are going to try to replicate the model we are implementing in Argentina.

(01:12:21)
And also, Donald Trump himself is very enthusiastic about this, and anything in the way of reducing regulations, cutting public spending, and taking government out of the equation means more freedom for the people. So I’m very pleased with what’s going on, and with Trump’s victory, because if the United States is better off, Argentina is going to be better off too, and the whole world is going to be better off. Today, the world is a much better place than it was just a few days ago.

DOGE

Lex Fridman
(01:12:54)
Like you said, Elon and Vivek Ramaswamy are heading the DOGE, the Department of Government Efficiency. So from your experience this year as president of Argentina and all the chainsaw economic policies that you’ve implemented, what advice would you give to Elon and Vivek about how to do it in the United States?
Javier Milei
(01:13:14)
Just cut to the chase. Cut to the chase. Simple as that. I’ll tell you a story, and you’re going to love it. Currently in Argentina, due to the political balance we’ve achieved, we have had certain powers delegated from Congress to the executive branch, so we can act by decree. The deregulation minister, Federico Sturzenegger, displays in his ministry, in front of everyone, a counter showing the number of days during which the delegated powers will continue to be valid. He has a whole deregulation division, a public spending cut division, and a government structure reduction division, and he also has an elite corps that’s cleaning up all of the laws that hinder the economic system and progress. And every day, he removes between one and five economic restrictions.

(01:14:24)
So, my advice would be for them to go all the way, to push it to the very limit, and not give up. Do not let down your guard. Furthermore, that agenda carries no political cost, because at the end of the day, you are removing privileges. Of course, there will be people complaining, but those are people who are losing privileges, so they will have to explain to society why they should keep those privileges, and that is quite uncomfortable.

Donald Trump

Lex Fridman
(01:14:57)
You’ve spoken with Donald Trump, allegedly he called you his favorite president. What did you discuss? And maybe, again, what do you admire about President Trump and what do you learn from him?
Javier Milei
(01:15:10)
There are several things that I admire about President Trump. The first is that, and I think he provided ample proof of this in his first presidency, he understands the nature of the cultural battle. He has openly confronted socialism, his speeches openly target socialism, he perfectly understands the woke virus, and that is of great value, because it means understanding what this is all about.

(01:15:50)
Another thing I truly admire about him is his courage. In fact, thank goodness he didn’t get assassinated; it was only by a small chance, because he moved at the right moment, that the shot didn’t kill him. And yet, that didn’t intimidate him and he went on. In fact, during his first campaign, and in the second and third ones as well, they criticized him, insulted him, offended him, said awful things about him, made up all sorts of horrible stories about him. In that respect, I can deeply relate, because probably no one in our history has had such a negative campaign from all the media as they ran against me. Let’s say the two were quite similar.

(01:16:54)
This is why it’s so interesting, and I was so deeply moved, when last night I also got to meet Sylvester Stallone. Because Sylvester Stallone talks about how important it is that no matter how hard they hit you, and keep on hitting you all the time, despite all that, you keep going on and on and on. What I’m trying to say is that so many of Sylvester Stallone’s approaches are truly inspirational, don’t you think? So imagine, I’m about to give the speech and I see Sylvester Stallone, and Sylvester Stallone knows me. It was truly insane. I had to pinch myself. I mean, this can’t be true.

(01:17:45)
And besides, well, the people were wonderful with me last night. They’ve been wonderful today. I’ve taken hundreds of selfies. I would say it’s been my break, after almost a year in office and having to face all sorts of media torture, because the journalists who have vested interests and are corrupt are professional torturers. Yes, because they invade your personal life, your family, and your privacy. Let me tell you something to show you the kind of garbage the media in Argentina can do. They sent three drones to spy on me at my presidential residence. To spy on me. Do you think that’s right?
Lex Fridman
(01:18:31)
No.
Javier Milei
(01:18:32)
Exactly. But that kind of thing happens in Argentina, not to mention the many lies and horrible things they say. I, for instance, remember the time when my father was hospitalized. My father is a man of really strong character who has had two heart surgeries, all right? And one day, a journalist was saying all sorts of lies about my father while he was hospitalized, and he almost died of a heart attack. That is the kind of thing journalism and the press do in Argentina. They attack your private life, your mother, your father, your sister. Even my dogs, which I absolutely adore, they are the most wonderful beings in the universe, they even target my four-legged children.

(01:19:24)
So, imagine: I’ve been in office for nearly a year as president, and since they can’t criticize my management except by lying and distorting the numbers, they meddle with all these things, things they have been doing all the time since the year 2021, when I officially entered politics. And I’ve seen what they’ve done to Trump. So that also makes me relate a lot to him, because he’s a true warrior. He’s a Viking, he’s literally a Viking. He’s someone I admire for how he has kept fighting in the face of adversity, even against all odds. And still he managed to win. Amazing.

(01:20:22)
And that’s why I can relate so much. I’ve also seen how he’s been unfairly criticized, like when he was accused of protectionism, or when he wanted to discuss, within the context of public debate, some matters regarding the design of monetary policy and the Fed. And basically, they have accused him of things. Isn’t he entitled to give an opinion as a president? Any citizen could give their opinion, even more so a president.

US and Argentina relations

Lex Fridman
(01:20:56)
Why is it important to you that Argentina has a close relationship with the United States?
Javier Milei
(01:21:03)
Well, to us, that is truly important, okay? Ever since our campaign, we decided to be geopolitical allies of the United States. We have decided that our allies will be the United States and Israel, because they basically represent the ideas of the western world; they represent the free world. That is to say, what we would today call a liberal democracy confronting the autocrats. And in that sense, that is the geopolitical alignment.

(01:21:44)
Moreover, in our campaign, we were very, very clear on three main points. One, the economic pillar. We talked about cutting public spending and I would make my appearances with a chainsaw. We talked about economic freedom, deregulation, that is, and I talked about a competition of currencies, and people obviously were interested in the dollar. So, it was obvious that the economic policy was clear, all right? And not only was it clear, but we are also fulfilling it. That is the first point.

(01:22:17)
Second was our policy on security. The idea being to fight crime relentlessly, with no mercy, right? And in fact, in Argentina, there are no more roadblocks, which they said were impossible to end. Not only that, we have strengthened the security forces and also our armed forces, and we are waging a tough battle against drug trafficking and narcoterrorism. Therefore, we are also strongly fulfilling that. Notice that these two points, which were the biggest concerns of Argentinians when we took office, are now in fifth and sixth place.

(01:22:59)
Today, the problems for Argentinians are corruption, unemployment, and poverty, but they don’t mention inflation and insecurity anymore. And besides, a third point I made clear was that I would align with the United States and Israel internationally, and at my campaign rallies there would be groups that came along with flags of Israel. So our international policy approach was always very clear, and this is something I state during my speeches when I talk about the values of the West and Western civilization. In fact, yesterday, and even more so today during my speeches, I talked about how the different Greek groups, or tribes, came together to confront the Persians.

(01:23:58)
That is to say, from that time, 500 years before Christ, until today, that struggle continues, right? So of course we’re all in. We are betting on the United States becoming, once again, a leader in the West. We needed someone to come back to make America great again. And as part of that process, being a commercial ally is also a great idea. So we would really like to move forward and deepen our trade ties and our investment ties. And we would also like to be part of NATO as well.
Lex Fridman
(01:24:52)
Do you think it’s still possible… One of the radical ideas you had as you were running for president was to dollarize the Argentine economy. Do you think that’s still a good idea? Are you still thinking about that?
Javier Milei
(01:25:05)
Let’s see, let’s break it down. Let’s say, if you review all my statements, I talk about currency competition. I’m not strictly talking about dollarization, I’m talking about currency competition and eliminating the central bank. If people later decide to embrace the dollar, that is their choice. Ultimately, in the model I propose, what happens is the formation of a currency basket tailored to the needs of individuals.

(01:25:38)
But I won’t avoid the discussion. Today, there is currency competition. If, for instance, you want to make transactions in Argentina today in any currency, you can do it; it’s allowed. Today there is currency competition. The other thing we talk about, supposing we were discussing dollarization, is the concept of endogenous dollarization. The first point is that you need to clean up the central bank. We had to deal with the issue of the CIRA, that is, the central bank’s commercial debt, which was $50 billion. We still have to resolve the dividend problem of $10 billion. And in the meantime, we did a write-off and cleaned up the central bank’s balance sheet by $45 billion. So, you can’t just close the central bank if it is bankrupt, because you need to redeem the whole central bank debt, which involves the issuing of money and the interest-bearing liabilities. So once we finished with the interest-bearing liabilities, that left us with the monetary base.

(01:26:40)
Therefore, today we have a regime where the amount of money is fixed, the monetary base is not growing, and as demand for money increases, since people can use dollars, they don’t need to go and sell the dollars and make the peso appreciate; they can do transactions in dollars. So as the economy grows, you will have a greater share of dollars relative to pesos. And at some point, the amount of pesos compared to the dollars will be so small, relatively, that closing down the central bank will be easy, which means this is working.

(01:27:19)
Of course, if you were to give me the money right now, I would go ahead and dollarize. I’d have no problem with that. For example, I did have a proposal for this, and it could have worked, because the largest creditor of the Argentine treasury is the central bank, but central bank bonds were trading at 20 cents. Nowadays they are trading between 60 and 70, so if I had sold those bonds at 20 cents, then with the whole bunch of Neanderthals in the opposition, who besides being ignorant of economics also have bad intentions, I would be in jail today.

Messi vs Maradona

Lex Fridman
(01:28:05)
Let me ask you a very important, difficult question. I’m a huge fan, have been my whole life, of Diego Maradona and Messi. So who, to you, is the greatest football player of all time?
Javier Milei
(01:28:18)
The way I see it, I have seen Maradona play, all right? I used to watch him. I saw him during his last year at Argentinos Juniors, before Boca Juniors, in the year 1980, and I saw him in ’81 playing for Boca. I saw him play in the youth selection in Japan in 1979. I have truly, immensely enjoyed the talent of Maradona, but without a doubt, the best soccer player of all time, not just from Argentina, of all time, even better than Pelé, is Messi, of course.

(01:29:04)
There is an article, which is quite old by now, titled Messi Is Impossible. And it looks at all of the positions a soccer player can play in, from midfield forward. And the most incredible thing is that Messi is the best in each of those positions. You can be the best in one or two positions. Cristiano Ronaldo, for example, was very good in two areas of the game, so much so that he was almost like Messi, but he didn’t take part in the rest. However, Messi is the best one in all respects. That was at the time, of course. Nowadays, he’s an older player, right?
Javier Milei
(01:30:00)
He is an older player, right? And I’m not sure whether he can still keep up that performance on all fronts, but honestly, I have never in my life seen a player like Messi. I have never seen anyone like him, for real. Considering the goal average in the days of Pelé compared to Messi’s golden era and his career now, the number of equivalent goals is much greater than Pelé’s. Therefore, without a doubt, Messi is the greatest soccer player of all time. No one compares to him.
Lex Fridman
(01:30:44)
But it’s not just the numbers or the World Cup win, it’s the moments of genius on the field. Messi is unlike any other in that way.
Javier Milei
(01:30:56)
Messi does things that seem technically impossible, they seem physically impossible. The moves he makes don’t respect human logic. It’s like watching Usain Bolt run. It doesn’t feel possible. He moves in a way that doesn’t respect human logic, am I right?
Lex Fridman
(01:31:16)
Did you watch the 1986 World Cup with Maradona, with the hand of God, with the game against England? What was that like?
Javier Milei
(01:31:24)
Oh, yes, I do remember that very well. We watched it in the home of my godfather and saw how he dribbled past the England players. It was absolutely indescribable. There’s no way to put it into words. It’s as if I asked you to describe for me the love you have for your partner. You can’t do that, right? It’s something wonderful. You can’t describe it, you cannot put it into words. There are things where words just seem to fail, am I right? I really think that there are times when humans, or some humans, not all of them actually, have the privilege of being able to vibrate closer to God.

(01:32:35)
Some Puccini arias, for example, when you listen to them, when you listen to the famous aria from La Rondine, or the famous aria from Gianni Schicchi, you get the feeling that it was dictated to him by God. How can you put that into words? You can’t. There’s no way you can do that. Those are moments where we humans have that privilege. I say it as a human being, because I’m speaking from that perspective. I say this only as an admirer.

(01:33:14)
Some human beings have the ability to vibrate so close to God that you can’t describe it, you can only enjoy it. This is why, in Judaism, they don’t use the name of God, of the Creator, because how could you put something like that into words? And I believe those are times when we humans connect closer to the Creator and create unique things that you cannot describe. There are no words to describe that. The only thing you can do is enjoy it and be thankful that you can witness it.
Lex Fridman
(01:33:56)
You were a great footballer yourself in your youth. You were a goalkeeper. Many people would say that’s the toughest and most important position in football. Maybe you could speak about that experience and, in general, what’s harder: being a goalkeeper or president?
Javier Milei
(01:34:13)
Lovely question. Well, indeed, I used to be a goalkeeper, but I’m not so sure about whether I was any good. But the experience of having been a goalkeeper is very valuable. First, the goalkeeper is the only player that can use his hands, in a certain sector of the pitch, the penalty area. The other thing is that he’s also the only player who dresses differently. Moreover, his training is a solitary one. And most important, there is the very climax, the goal, right? When his team scores a goal, everyone is celebrating at the other end and the goalkeeper is on his own.

(01:35:18)
And at the same time, he’s the one who suffers the most when a goal is scored, because he gets the direct impact. In fact, when the goalkeeper makes a mistake, it’s an own goal. Imagine a teammate scores a wonderful goal like the one Maradona did. It’s marvelous, and that’s just one goal. Now imagine the goalkeeper picks up the ball and then, if he puts it into his own net by mistake, it’s like two goals. It’s a complete lack of proportion. And this, in my opinion, gives goalkeepers a very strong temperament.

(01:36:03)
They’re used to being alone, and power is precisely that, because when you make decisions, you are on your own. And not just that: when you have a responsibility like that of a president, when you make a decision, it has an impact on millions of people. So it’s just like being a goalkeeper: if you make a mistake and score an own goal, in this context the negative consequences fall on millions of people. Therefore, that has been part of the university of life that has given me the tools to be president today: my training in economics, my training in liberalism, having been a goalkeeper, and also having had a very tough childhood.

God

Lex Fridman
(01:36:59)
How hard is it? What’s been the personal toll of carrying the hope of a nation on your shoulders?
Javier Milei
(01:37:07)
Well, being defamed, insulted, and attacked every single day. But again, there’s no point in life if it’s not lived in freedom. So, as Sylvester Stallone once said, “The secret to life is to carry on in spite of the blows you get, the punches you take.” And fortunately, we have been able to carry on in spite of the blows, both those coming at us head-on and those from behind our backs, because it would have been more honest if we had been attacked directly. But well, in Argentina, politics and the mass media do love to attack behind your back.
Lex Fridman
(01:37:57)
What role has God played in your life? And who is God?
Javier Milei
(01:38:04)
Well, faith, I’d say, has been a very fundamental element. And especially in recent times, during which I’ve become actively involved, particularly in the teachings of Judaism and in the study of the Torah. This has given me, let’s say, a huge background to face the many adversities which I’ve encountered and had to overcome in the last few years. And as to who God is, He’s the Creator, the Maker. I call Him, The One.
Lex Fridman
(01:38:54)
What is a better guide for humanity: the invisible hand of the market or the hand of God?
Javier Milei
(01:39:00)
They’re perfectly in sync.

Elvis and Rolling Stones

Lex Fridman
(01:39:03)
Fair enough. Again, going back to your youth, you were the lead singer in a rock band. Who’s the greatest rock star of all time?
Javier Milei
(01:39:12)
Okay. Well, the way I see it, the most amazing rock singer in the history of mankind was definitely Elvis Presley. And my favorite band is the Rolling Stones. So I also greatly admire Mick Jagger, and I still have this dream of getting to meet him in person.
Lex Fridman
(01:39:38)
How fun would it be to play together with the Stones?
Javier Milei
(01:39:45)
That would be a big, big dream. Don’t get my hopes up, because I set goals and then I go and achieve them.
Lex Fridman
(01:39:55)
Well, I’m close friends with a band that opens for the Stones, so I would love to see this happen.
Javier Milei
(01:40:00)
Oh, well, that would be great. Or we could also watch the whole concert from the stage. I can’t keep ruining the Rolling Stones’ music. I already had a tribute band and did quite a lot of damage to their music.
Lex Fridman
(01:40:16)
How much do your rock star roots define your approach to politics, to life? Do you see yourself, in part, as a showman?
Javier Milei
(01:40:25)
Of course. Absolutely. My idea is that, when you attend one of our events, it feels like going to a Rolling Stones concert. In fact, in one of my most recent performances at Luna Park, I even had the pleasure of singing in front of 10,000 people. It’s on YouTube. No, sorry, not on YouTube, it’s on my Instagram feed. At that event, I sang a song called Panic Show, and the song starts by saying, “Hi, everybody. I am the lion.”
Lex Fridman
(01:41:06)
Your intensity and passion have earned you the nickname El Loco, the madman. Do you think some madness is necessary to challenge the powerful establishment?
Javier Milei
(01:41:19)
Well, maybe it’s a matter of perspective. It could be the other way around: everyone else is crazy for living in a way contrary to the ideas of freedom, and so maybe the sane person who wants to fix that is the one considered a madman. Anyway, the nickname doesn’t bother me at all. In fact, I even enjoy it, because I’ve been called that since I was 10 years old, so it’s not something that particularly bothers me; it’s a nickname that has been used for many years. But actually, consider the case of San Martín: when he said he was going to cross the Andes to liberate not only Argentina, our country, but also Chile and Peru, people called him crazy.

(01:42:13)
Imagine if you had spoken with, I don’t know, Michelangelo; you would have called him crazy too. Or if you had talked to, I don’t know, hundreds of people who have changed the world. Surely they would have thought that Einstein was crazy, and so on; the list would be infinite. So, what is the difference between a madman and a genius? Success.

Free market

Lex Fridman
(01:42:45)
Let me ask you about the market. It’s so interesting, from your view of the world, how powerful the market is at figuring out what’s best for society. Why do you think the market works so well as a guide for humanity?
Javier Milei
(01:43:03)
One must first understand what the market is. Simply put, the market is a process of voluntary exchange where individuals cooperate through the transfer of property rights, and in which private property is upheld. This is the system that drives the allocation of resources. In essence, Mises, in his book Socialism, shows that without private property, prices cease to exist and therefore resources are misallocated. Why do you think it’s not the same to make a road of asphalt as of gold? Why not make it of gold? Because you have an understanding of economic calculation; you have an idea of prices in your mind. So, in this context, if there is no private property, there are no prices, and as a result, free market capitalism is the best mechanism ever developed by humankind for resource allocation.

(01:44:13)
This also implies that markets must be free: free from state intervention, because when the state intervenes, it creates interference. And markets need to allow free entry and exit, what we call competition. However, it’s better to understand competition in the sense described by Israel Kirzner, one of the foremost figures of the Austrian school, or in the neoclassical framework as William Baumol understood it, which was the concept of free entry and exit in so-called contestable markets. And also, let’s talk about what pertains to the division of labor and social cooperation.

(01:44:57)
The most wonderful thing about capitalism is that you can only be successful by serving others with better quality goods at a better price. If you are successful in free market capitalism, you are a hero, you are a social benefactor, you are a prosperity machine. So the better you do, the better it is for society. This is very important. I remember when I had my first meeting with Elon Musk, and this made me admire him greatly, and it is something my sister commented on too.

(01:45:38)
Elon Musk told me something he does every day: he wakes up every morning thinking about what problem he could fix for humanity. That’s amazing. And what is the counterpart? Being successful. Moreover, in my view of how the system works, of how the market works, market failures do not exist. That is a problem for neoclassical economics, because of the mathematical tools they’ve used to develop economic analysis, but it’s not a real issue in everyday life; it’s a problem in the minds of economists. In fact, my latest book, Capitalism, Socialism, and the Neoclassical Trap, deals precisely with this issue.
Lex Fridman
(01:46:40)
Yeah, you’ve outlined these ideas in Capitalism, Socialism, and the Neoclassical Trap. So the trap is that there’s no such thing as a middle ground. It’s either capitalism or socialism, and every middle ground ends up in a state of socialism.
Javier Milei
(01:46:55)
Well, actually, that is what Mises said. He said that there are only two systems: free enterprise capitalism and socialism. And he also pointed out, and this is proven in Hayek’s book The Road to Serfdom, that any middle-ground solution is unstable in terms of capitalism, meaning it tends towards socialism. So when you implement an intervention, it causes a government failure, which then triggers further intervention, setting up a trap that results in more and more intervention. And in this context, the neoclassicals, with their market failure theory, are in fact dealing with problems that are fundamentally mathematical. Rather than making the world a better place, they have, if you will, been instrumental in increasing the levels of intervention. Let me tell you something.

(01:47:51)
Well, I have an economist as chairman of the President’s Advisory Council, Dr. Demian Reidel, who studied at Harvard University, where he completed his PhD, mentored by the American economist Kenneth Rogoff. And Rogoff has said that Dr. Reidel was his best student. Nowadays, we’re actually working with Dr. Reidel specifically on all these issues that arise from the interventions proposed by the mainstream, such as the so-called correction of market failures. And a few days ago, he conducted a survey, using search algorithms, of policy recommendations, and that resulted in a map painted from red to blue.

(01:49:03)
And well, the redder it was, the more it was linked to socialism; there was an intermediate range that was yellow, and blue was free market ideas. And one of the things he discovered, as part of that chart, was that the largest number of policy recommendations, scandalously, are actually left-leaning. So that is the empirical evidence for what I pointed out in the book Capitalism, Socialism, and the Neoclassical Trap.

Loyalty

Lex Fridman
(01:49:46)
You mentioned your four-legged children. What have you learned about life from your dogs?
Javier Milei
(01:49:54)
Well, from my four-legged children I have learned unconditional love. In fact, my name in Hebrew means loyal friend, faithful friend, and in the Chinese horoscope, I am a dog. And if there’s one thing that defines me, it is loyalty, being decent. And those virtues you can find in those wonderful beings that dogs are, who love unconditionally. In fact, they are superior beings, spiritually speaking, in my case, because I don’t forget or forgive those who have harmed me. That is to say, all those who have insulted me, defamed me, and criticized me, I remember each one of them, but I don’t have the greatness needed to forgive them.
Lex Fridman
(01:51:04)
On the topic of loyalty in politics, I’m sure there’s been a lot of people, some people, who have betrayed you. Does that hurt your heart?
Javier Milei
(01:51:17)
It depends, because sometimes you expect certain people to be loyal, and if they betray you, of course that hurts. But from some people you actually don’t expect anything, so if there’s betrayal, you won’t be annoyed or feel bad, because it came from someone who didn’t share your values. But politics does have that. Sometimes, many of the people you come across don’t have the values you advocate for, but it’s cost-benefit. You need to let the ship sail on. Or would you rather let it sink? That’s not my case. I fight until the end. There are traitors; that’s part of politics. It’s not my way, but of course, they do exist.

Advice for young people

Lex Fridman
(01:52:23)
There are a lot of people who admire your revolutionary spirit. What advice would you give them, maybe young people, on how to live a life like yours and have an impact on the world, like you have begun to do?
Javier Milei
(01:52:40)
I didn’t do this thinking about having an impact on the world. I have defined what makes me happy and I live according to that; I live consistently by it. And most importantly, I would say: never give up. Moreover, and above all, never be half-hearted. I would rather cry because I failed than not cry because I never tried. I’m a perfectionist, so when I do err, of course, I have a bad time. But still, I prefer to go and get things done. If it goes wrong, that’s part of life, but I will never have to regret not having done what I thought needed to be done at that moment.

Hope for Argentina

Lex Fridman
(01:53:50)
What gives you hope about the future of Argentina and the future of humanity?
Javier Milei
(01:53:56)
Well, the fact that, thanks to social media and to the whole tech revolution going on, every day more and more people are becoming aware of how important freedom is to live in peace and prosperity. And I believe that, even though bureaucrats and the elites fight untiringly to enslave us, a wave of freedom has been unleashed, and if we wage that fight, we’ll have a much better world.
Lex Fridman
(01:54:42)
Your famous words, “Viva la libertad, carajo”: how did that come about, and what does it mean to you?
Javier Milei
(01:54:49)
Long live freedom, dammit. That first started while I was giving my book presentations. At the end of each presentation, I would say, “Viva la libertad, carajo,” and it has stuck with me ever since; without my planning it, it has remained present throughout my life. In fact, today all of my speeches end with, “May God bless the Argentinians. May the forces of heaven be with us. And viva la libertad, carajo.” The first phrase reflects my fervent faith in God, and that I’m deeply thankful to the Creator for the wonderful things He bestows upon me daily. The second has to do with a quote from the book of Maccabees, 3:19, which says that “victory in battle doesn’t depend on the size of the army, but on the forces of heaven.” This has to do with the victory of the Jewish people, the Maccabees, against the Greeks, and how they recovered the Temple. And the last one, well, is my war cry.
Lex Fridman
(01:56:15)
Well, there’s no better way to end it. Thank you for being a warrior for freedom, and thank you for talking today.
Javier Milei
(01:56:21)
Thank you very much indeed for your interview. And thank you for being so well-mannered, because very often interviewers are not like that. You had openings to play dirty and you didn’t, and I recognize that and I thank you for it.
Lex Fridman
(01:56:37)
Thank you.

(01:56:39)
Thanks for listening to this conversation with Javier Milei. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from George Orwell. “In a time of deceit, telling the truth is a revolutionary act.” Thank you for listening and hope to see you next time.

Transcript for Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity | Lex Fridman Podcast #452

This is a transcript of Lex Fridman Podcast #452 with Dario Amodei.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Dario Amodei
(00:00:00)
If you extrapolate the curves that we’ve had so far, right? If you say, “Well, I don’t know, we’re starting to get to PhD level, and last year we were at undergraduate level, and the year before we were at the level of a high school student,” again, you can quibble with what tasks and for what. “We’re still missing modalities, but those are being added,” like computer use was added, like image generation has been added. If you just kind of eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.

(00:00:31)
I think there are still worlds where it doesn’t happen in 100 years. The number of those worlds is rapidly decreasing. We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years. The scale-up is very quick. We do this today, we make a model, and then we deploy thousands, maybe tens of thousands of instances of it. I think by the time, certainly within two to three years, whether we have these super powerful AIs or not, clusters are going to get to the size where you’ll be able to deploy millions of these.

(00:01:03)
I am optimistic about meaning. I worry about economics and the concentration of power. That’s actually what I worry about more, the abuse of power.
Lex Fridman
(00:01:14)
And AI increases the amount of power in the world. And if you concentrate that power and abuse that power, it can do immeasurable damage.
Dario Amodei
(00:01:22)
Yes, it’s very frightening. It’s very frightening.
Lex Fridman
(00:01:27)
The following is a conversation with Dario Amodei, CEO of Anthropic, the company that created Claude, which is currently and often at the top of most LLM benchmark leaderboards. On top of that, Dario and the Anthropic team have been outspoken advocates for taking the topic of AI safety very seriously, and they have continued to publish a lot of fascinating AI research on this and other topics.

(00:01:55)
I’m also joined afterwards by two other brilliant people from Anthropic. First Amanda Askell, who is a researcher working on alignment and fine-tuning of Claude, including the design of Claude’s character and personality. A few folks told me she has probably talked with Claude more than any human at Anthropic. So she was definitely a fascinating person to talk to about prompt engineering and practical advice on how to get the best out of Claude.

(00:02:27)
After that, Chris Olah stopped by for a chat. He’s one of the pioneers of the field of mechanistic interpretability, an exciting set of efforts that aims to reverse engineer neural networks, to figure out what’s going on inside, inferring behaviors from neural activation patterns inside the network. This is a very promising approach for keeping future super-intelligent AI systems safe, for example, by detecting from the activations when the model is trying to deceive the human it is talking to.

(00:03:03)
This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Dario Amodei.

Scaling laws

Lex Fridman
(00:03:14)
Let’s start with a big idea of scaling laws and the scaling hypothesis. What is it? What is its history, and where do we stand today?
Dario Amodei
(00:03:22)
So I can only describe it as it relates to my own experience, but I’ve been in the AI field for about 10 years, and it was something I noticed very early on. I first joined the AI world when I was working at Baidu with Andrew Ng in late 2014, which is almost exactly 10 years ago now. And the first thing we worked on was speech recognition systems. In those days, I think deep learning was a new thing. It had made lots of progress, but everyone was always saying, “We don’t have the algorithms we need to succeed. We are only matching a tiny fraction. There’s so much we need to discover algorithmically. We haven’t found the picture of how to match the human brain.”

(00:04:05)
And in some ways it was fortunate, you can have almost beginner’s luck. I was like a newcomer to the field. And I looked at the neural net that we were using for speech, the recurrent neural networks, and I said, “I don’t know, what if you make them bigger and give them more layers? And what if you scale up the data along with this?” I just saw these as independent dials that you could turn. And I noticed that the models started to do better and better as you gave them more data, as you made the models larger, as you trained them for longer. And I didn’t measure things precisely in those days, but along with colleagues, we very much got the informal sense that the more data and the more compute and the more training you put into these models, the better they perform.

(00:04:51)
And so initially my thinking was, “Hey, maybe that is just true for speech recognition systems. Maybe that’s just one particular quirk, one particular area.” I think it wasn’t until 2017, when I first saw the results from GPT-1, that it clicked for me that language is probably the area in which we can do this. We can get trillions of words of language data, we can train on them. And the models we trained in those days were tiny. You could train them on one to eight GPUs, whereas now we train jobs on tens of thousands, soon going to hundreds of thousands, of GPUs.

(00:05:28)
And so when I saw those two things together, there were a few people, like Ilya Sutskever, who you’ve interviewed, who had somewhat similar views. He might’ve been the first one, although I think a few people came to similar views around the same time, right? There was Rich Sutton’s bitter lesson; Gwern wrote about the scaling hypothesis. But I think somewhere between 2014 and 2017 was when it really clicked for me, when I really got conviction that, “Hey, we’re going to be able to do these incredibly wide cognitive tasks if we just scale up the models.”

(00:06:03)
And at every stage of scaling, there are always arguments. And when I first heard them honestly, I thought, “Probably I’m the one who’s wrong and all these experts in the field are right. They know the situation better than I do, right?” There’s the Chomsky argument about, “You can get syntactics but you can’t get semantics.” There was this idea, “Oh, you can make a sentence make sense, but you can’t make a paragraph make sense.” The latest one we have today is, “We’re going to run out of data, or the data isn’t high quality enough or models can’t reason.”

(00:06:34)
And each time, every time, we manage to either find a way around or scaling just is the way around. Sometimes it’s one, sometimes it’s the other. And so I’m now at this point, I still think it’s always quite uncertain. We have nothing but inductive inference to tell us that the next two years are going to be like the last 10 years. But I’ve seen the movie enough times, I’ve seen the story happen enough times, to really believe that probably the scaling is going to continue, and that there’s some magic to it that we haven’t really explained on a theoretical basis yet.
Lex Fridman
(00:07:10)
And of course the scaling here is bigger networks, bigger data, bigger compute?
Dario Amodei
(00:07:16)
Yes.
Lex Fridman
(00:07:17)
All of those?
Dario Amodei
(00:07:17)
In particular, linear scaling up of bigger networks, bigger training times and more and more data. So all of these things, almost like a chemical reaction, you have three ingredients in the chemical reaction and you need to linearly scale up the three ingredients. If you scale up one, not the others, you run out of the other reagents and the reaction stops. But if you scale up everything in series, then the reaction can proceed.
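The “chemical reaction” picture can be sketched with a toy Chinchilla-style loss formula, where model size and data each contribute a separately decaying error term. All constants below are invented for illustration and are not fit to any real model:

```python
# Toy illustration of "scale everything together": a hypothetical
# loss surface L(N, D) = A/N^alpha + B/D^beta + E, where N is model
# size, D is data, and E is irreducible loss. Constants are made up.

def loss(n_params: float, n_tokens: float) -> float:
    A, alpha = 400.0, 0.34   # model-size term (hypothetical)
    B, beta = 410.0, 0.28    # data term (hypothetical)
    E = 1.7                  # irreducible loss (hypothetical)
    return A / n_params**alpha + B / n_tokens**beta + E

base_n, base_d = 1e8, 2e9

# Scale only the model 100x: the data term becomes the bottleneck,
# like running out of one reagent in the reaction.
only_model = loss(base_n * 100, base_d)

# Scale model and data together 100x: the loss keeps falling.
both = loss(base_n * 100, base_d * 100)

assert both < only_model < loss(base_n, base_d)
print(f"{loss(base_n, base_d):.3f} -> {only_model:.3f} -> {both:.3f}")
```

With these invented constants, scaling one ingredient alone quickly stops helping, while scaling both in concert continues to reduce the loss, which is the qualitative shape of the point being made.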
Lex Fridman
(00:07:45)
And of course now that you have this kind of empirical science/art, you can apply it to other more nuanced things like scaling laws applied to interpretability or scaling laws applied to post-training. Or just seeing how does this thing scale. But the big scaling law, I guess the underlying scaling hypothesis has to do with big networks, big data leads to intelligence?
Dario Amodei
(00:08:09)
Yeah, we’ve documented scaling laws in lots of domains other than language. The paper we did that first showed it was in early 2020, for language. There was then some work late in 2020 where we showed the same thing for other modalities like images, video, text to image, image to text, math. They all had the same pattern. And you’re right, now there are other stages like post-training, or there are new types of reasoning models. And in all of those cases that we’ve measured, we see similar types of scaling laws.
Lex Fridman
(00:08:48)
A bit of a philosophical question, but what’s your intuition about why bigger is better in terms of network size and data size? Why does it lead to more intelligent models?
Dario Amodei
(00:09:00)
So in my previous career as a biophysicist… So I did a physics undergrad and then biophysics in grad school. So I think back to what I know as a physicist, which is actually much less than what some of my colleagues at Anthropic have in terms of expertise in physics. There’s this concept called the one over F noise and one over X distributions, where often, just like if you add up a bunch of natural processes, you get a Gaussian, if you add up a bunch of differently-distributed natural processes… If you take a probe and hook it up to a resistor, the distribution of the thermal noise in the resistor goes as one over the frequency. It’s some kind of natural convergent distribution.

(00:09:50)
And I think what it amounts to, is that if you look at a lot of things that are produced by some natural process that has a lot of different scales, not a Gaussian, which is kind of narrowly distributed, but if I look at large and small fluctuations that lead to electrical noise, they have this decaying one over X distribution. And so now I think of patterns in the physical world or in language. If I think about the patterns in language, there are some really simple patterns, some words are much more common than others, like the. Then there’s basic noun-verb structure. Then there’s the fact that nouns and verbs have to agree, they have to coordinate. And there’s the higher-level sentence structure. Then there’s the thematic structure of paragraphs. And so the fact that there’s this regressing structure, you can imagine that as you make the networks larger, first they capture the really simple correlations, the really simple patterns, and there’s this long tail of other patterns.

(00:10:49)
And if that long tail of other patterns is really smooth, like it is with the one over F noise in physical processes like resistors, then you can imagine, as you make the network larger, it’s kind of capturing more and more of that distribution. And so that smoothness gets reflected in how good the models are at predicting and how well they perform.

(00:11:10)
Language is an evolved process. We’ve developed language, we have common words and less common words. We have common expressions and less common expressions. We have ideas, cliches, that are expressed frequently, and we have novel ideas. And that process has developed, has evolved with humans over millions of years. And so the guess, and this is pure speculation, would be that there’s some kind of long tail distribution of the distribution of these ideas.
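A quick numerical sketch of that long-tail point: if pattern frequencies follow a 1/x (Zipf) law, the probability mass a model captures grows smoothly as it learns more patterns, with no sharp cutoff. The vocabulary size and rank cutoffs below are arbitrary:

```python
# If pattern frequencies decay like 1/rank (a Zipf / one-over-x
# distribution), a model that has learned the top-k patterns
# captures mass slowly and smoothly -- each 100x more capacity
# buys roughly the same additional chunk of the distribution.

def zipf_mass_captured(k: int, vocab: int) -> float:
    """Fraction of total mass in the top-k ranks of a 1/r law."""
    total = sum(1.0 / r for r in range(1, vocab + 1))
    top = sum(1.0 / r for r in range(1, k + 1))
    return top / total

V = 1_000_000  # arbitrary number of distinct "patterns"
for k in (10, 1_000, 100_000):
    print(k, round(zipf_mass_captured(k, V), 3))
```

Under a narrow (Gaussian-like) distribution, the top few patterns would contain almost all the mass and gains would saturate immediately; the 1/x tail is what leaves something meaningful for each increment of scale to pick up.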
Lex Fridman
(00:11:41)
So there’s the long tail, but also there’s the height of the hierarchy of concepts that you’re building up. So the bigger the network, presumably you have a higher capacity to-
Dario Amodei
(00:11:50)
Exactly. If you have a small network, you only get the common stuff. If I take a tiny neural network, it’s very good at understanding that a sentence has to have verb, adjective, noun, but it’s terrible at deciding what those verb, adjective, and noun should be and whether they should make sense. If I make it just a little bigger, it gets good at that, then suddenly it’s good at the sentences, but it’s not good at the paragraphs. And so these rarer and more complex patterns get picked up as I add more capacity to the network.

Limits of LLM scaling

Lex Fridman
(00:12:20)
Well, the natural question then is what’s the ceiling of this?
Dario Amodei
(00:12:24)
Yeah.
Lex Fridman
(00:12:24)
How complicated and complex is the real world? How much stuff is there to learn?
Dario Amodei
(00:12:30)
I don’t think any of us knows the answer to that question. My strong instinct would be that there’s no ceiling below the level of humans. We humans are able to understand these various patterns. And so that makes me think that if we continue to scale up these models, and kind of develop new methods for training them and scaling them up, we will at least get to the level that we’ve gotten to with humans. There’s then a question of how much more is it possible to understand than humans do? How much is it possible to be smarter and more perceptive than humans? I would guess the answer has got to be domain-dependent.

(00:13:09)
If I look at an area like biology, and I wrote this essay, Machines of Loving Grace, it seems to me that humans are struggling to understand the complexity of biology. If you go to Stanford or to Harvard or to Berkeley, you have whole departments of folks trying to study the immune system or metabolic pathways, and each person understands only a tiny part of it, specializes. And they’re struggling to combine their knowledge with that of other humans. And so I have an instinct that there’s a lot of room at the top for AIs to get smarter.

(00:13:46)
If I think of something like materials in the physical world, or addressing conflicts between humans or something like that, I mean, it may be that some of these problems are not intractable, but much harder. And it may be that there’s only so well you can do at some of these things. Just like with speech recognition, there’s only so clear I can hear your speech. So I think in some areas there may be ceilings that are very close to what humans have done. In other areas, those ceilings may be very far away. I think we’ll only find out when we build these systems. It’s very hard to know in advance. We can speculate, but we can’t be sure.
Lex Fridman
(00:14:26)
And in some domains, the ceiling might have to do with human bureaucracies and things like this, as you write about.
Dario Amodei
(00:14:31)
Yes.
Lex Fridman
(00:14:31)
So humans fundamentally have to be part of the loop. That’s the cause of the ceiling, maybe, not the limits of the intelligence.
Dario Amodei
(00:14:38)
Yeah, I think in many cases, in theory, technology could change very fast. For example, all the things that we might invent with respect to biology. But remember, there’s a clinical trial system that we have to go through to actually administer these things to humans. I think that’s a mixture of things that are unnecessarily bureaucratic and things that kind of protect the integrity of society. And the whole challenge is that it’s hard to tell what’s going on. It’s hard to tell which is which.

(00:15:11)
I think in terms of drug development, my view is that we’re too slow and we’re too conservative. But certainly if you get these things wrong, it’s possible to risk people’s lives by being too reckless. And so at least some of these human institutions are in fact protecting people. So it’s all about finding the balance. I strongly suspect that balance is kind of more on the side of wishing to make things happen faster, but there is a balance.
Lex Fridman
(00:15:39)
If we do hit a limit, if we do hit a slowdown in the scaling laws, what do you think would be the reason? Is it compute-limited, data-limited? Is it something else? Idea limited?
Dario Amodei
(00:15:51)
So a few things, now we’re talking about hitting the limit before we get to the level of humans and the skill of humans. So I think one that’s popular today, and I think could be a limit that we run into, like most of the limits, I would bet against it, but it’s definitely possible, is we simply run out of data. There’s only so much data on the internet, and there’s issues with the quality of the data. You can get hundreds of trillions of words on the internet, but a lot of it is repetitive or it’s search engine optimization drivel, or maybe in the future it’ll even be text generated by AIs itself. And so I think there are limits to what can be produced in this way.

(00:16:34)
That said, we, and I would guess other companies, are working on ways to make data synthetic, where you can use the model to generate more data of the type that you have already, or even generate data from scratch. If you think about what was done with DeepMind’s AlphaGo Zero, they managed to get a bot all the way from no ability to play Go whatsoever to above human level, just by playing against itself. There was no example data from humans required in the AlphaGo Zero version of it.

(00:17:07)
The other direction, of course, is these reasoning models that do chain of thought and stop to think and reflect on their own thinking. In a way that’s another kind of synthetic data, coupled with reinforcement learning. So my guess is with one of those methods, we’ll get around the data limitation, or there may be other sources of data that are available. We could also just observe that, even if there’s no problem with data, as we start to scale models up, they just stop getting better. It seemed to be a reliable observation that they’ve gotten better; that could just stop at some point for a reason we don’t understand.
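The synthetic-data idea mentioned above, where a model generates candidates and only verified ones are kept, can be caricatured with a toy task. The task, the error rate, and the names below are all invented for illustration; real pipelines are far more involved:

```python
import random

# Toy version of verifier-filtered synthetic data: a deliberately
# unreliable "model" proposes answers, a cheap verifier filters
# them, and only verified pairs become training data. Checking an
# answer is much easier than producing it, so the filtered set is
# clean even though the generator is noisy.

random.seed(0)

def weak_model(a: int, b: int) -> int:
    """A noisy stand-in for a model answering 'a + b'."""
    guess = a + b
    if random.random() < 0.4:            # wrong ~40% of the time
        guess += random.choice([-2, -1, 1, 2])
    return guess

def verifier(a: int, b: int, answer: int) -> bool:
    return answer == a + b

synthetic = []
for _ in range(1000):
    a, b = random.randrange(100), random.randrange(100)
    ans = weak_model(a, b)
    if verifier(a, b, ans):
        synthetic.append((a, b, ans))    # keep only verified data

# Every kept example is correct despite the noisy generator.
assert all(a + b == ans for a, b, ans in synthetic)
print(f"kept {len(synthetic)} / 1000 verified examples")
```

This is the same shape as self-play in AlphaGo Zero or training on verified chains of thought: the system's own outputs, filtered by something that can judge correctness, become the next round of data.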

(00:17:43)
The answer could be that we need to invent some new architecture. There have been problems in the past with say, numerical stability of models where it looked like things were leveling off, but actually when we found the right unblocker, they didn’t end up doing so. So perhaps there’s some new optimization method or some new technique we need to unblock things. I’ve seen no evidence of that so far, but if things were to slow down, that perhaps could be one reason.
Lex Fridman
(00:18:15)
What about the limits of compute, meaning the expensive nature of building bigger and bigger data centers?
Dario Amodei
(00:18:23)
So right now, I think most of the frontier model companies, I would guess, are operating at roughly $1 billion scale, plus or minus a factor of three. Those are the models that exist now or are being trained now. I think next year we’re going to go to a few billion, and then in 2026, we may go to above $10 billion. And probably by 2027, there are ambitions to build $100 billion clusters. And I think all of that actually will happen. There’s a lot of determination to build the compute, to do it within this country, and I would guess that it actually does happen.

(00:19:02)
Now, if we get to a hundred billion, that’s still not enough compute, that’s still not enough scale, then either we need even more scale, or we need to develop some way of doing it more efficiently of shifting the curve. I think between all of these, one of the reasons I’m bullish about powerful AI happening so fast, is just that if you extrapolate the next few points on the curve, we’re very quickly getting towards human level ability.

(00:19:28)
Some of the new models that we developed, some reasoning models that have come from other companies, they’re starting to get to what I would call the PhD or professional level. If you look at their coding ability, the latest model we released, Sonnet 3.5, the new or updated version, it gets something like 50% on SWE-bench. And SWE-bench is an example of a bunch of professional real-world software engineering tasks. At the beginning of the year, I think the state of the art was 3 or 4%. So in 10 months we’ve gone from 3% to 50% on this task. And I think in another year we’ll probably be at 90%. I mean, I don’t know, but might even be less than that.

(00:20:11)
We’ve seen similar things in graduate-level math, physics, and biology from models like OpenAI’s o1. So if we just continue to extrapolate this in terms of skill, I think if we extrapolate the straight curve, within a few years, we will get to these models being above the highest professional level in terms of humans. Now, will that curve continue? You’ve pointed to, and I’ve pointed to, a lot of possible reasons why that might not happen. But if the extrapolation curve continues, that is the trajectory we’re on.
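The extrapolation described here can be reproduced as a back-of-envelope calculation from the SWE-bench numbers quoted above (roughly 3% at the start of the year, about 50% ten months later), assuming progress is linear in log-odds. This is purely an illustrative toy, not a forecasting method anyone in the conversation endorses:

```python
import math

# Fit a straight line in log-odds space through the two quoted
# SWE-bench points, then extend it twelve more months.

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

p0, p1, months = 0.03, 0.50, 10
rate = (logit(p1) - logit(p0)) / months   # log-odds gained per month

p_next_year = sigmoid(logit(p1) + rate * 12)
print(f"{p_next_year:.1%}")               # lands above the ~90% guess
```

The straight-line extrapolation overshoots the quoted "probably 90% in another year" guess, which is consistent with the hedge that it might take even less time than that, if the curve continues at all.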

Competition with OpenAI, Google, xAI, Meta

Lex Fridman
(00:20:46)
So Anthropic has several competitors. It’d be interesting to get your sort of view of it all. OpenAI, Google, xAI, Meta. What does it take to win in the broad sense of win in this space?
Dario Amodei
(00:20:58)
Yeah, so I want to separate out a couple things, right? Anthropic’s mission is to kind of try to make this all go well. And we have a theory of change called Race to the Top. Race to the Top is about trying to push the other players to do the right thing by setting an example. It’s not about being the good guy, it’s about setting things up so that all of us can be the good guy.

(00:21:24)
I’ll give a few examples of this. Early in the history of Anthropic, one of our co-founders, Chris Olah, who I believe you’re interviewing soon, he’s the co-founder of the field of mechanistic interpretability, which is an attempt to understand what’s going on inside AI models. So we had him and one of our early teams focus on this area of interpretability, which we think is good for making models safe and transparent.

(00:21:48)
For three or four years that had no commercial application whatsoever. It still doesn’t. Today we’re doing some early betas with it, and probably it will eventually, but this is a very, very long research bet, and one in which we’ve built in public and shared our results publicly. And we did this because we think it’s a way to make models safer. An interesting thing is that as we’ve done this, other companies have started doing it as well. In some cases because they’ve been inspired by it, in some cases because they’re worried that if other companies doing this look more responsible, they want to look more responsible too. No one wants to look like the irresponsible actor. And so they adopt this as well. When folks come to Anthropic, interpretability is often a draw, and I tell them, “The other places you didn’t go, tell them why you came here.” And then you see soon that there’s interpretability teams elsewhere as well.

(00:22:47)
And in a way that takes away our competitive advantage, because it’s like, “Oh, now others are doing it as well.” But it’s good for the broader system, and so we have to invent some new thing that we’re doing that others aren’t doing as well. And the hope is to basically bid up the importance of doing the right thing. And it’s not about us in particular. It’s not about having one particular good guy. Other companies can do this as well. If they join the race to do this, that’s the best news ever. It’s about shaping the incentives to point upward instead of shaping the incentives to point downward.
Lex Fridman
(00:23:25)
And we should say this example of the field of mechanistic interpretability is just a rigorous, non-hand-wavy way of doing AI safety-
Dario Amodei
(00:23:34)
Yes.
Lex Fridman
(00:23:34)
… or it’s tending that way.
Dario Amodei
(00:23:36)
Trying to. I mean, I think we’re still early in terms of our ability to see things, but I’ve been surprised at how much we’ve been able to look inside these systems and understand what we see. Unlike with the scaling laws where it feels like there’s some law that’s driving these models to perform better, on the inside, the models aren’t… There’s no reason why they should be designed for us to understand them, right? They’re designed to operate, they’re designed to work. Just like the human brain or human biochemistry. They’re not designed for a human to open up the hatch, look inside and understand them. But we have found, and you can talk in much more detail about this to Chris, that when we open them up, when we do look inside them, we find things that are surprisingly interesting.
Lex Fridman
(00:24:20)
And as a side effect, you also get to see the beauty of these models. You get to explore the beautiful nature of large neural networks through the mech interp kind of methodology.
Dario Amodei
(00:24:29)
I’m amazed at how clean it’s been. I’m amazed at things like induction heads. I’m amazed that we can use sparse autoencoders to find these directions within the networks, and that the directions correspond to these very clear concepts.

(00:24:49)
We demonstrated this a bit with the Golden Gate Bridge Claude. So this was an experiment where we found a direction inside one of the neural network’s layers that corresponded to the Golden Gate Bridge, and we just turned that way up. And so we released this model as a demo, it was kind of half a joke, for a couple days, but it was illustrative of the method we developed. And you could take the model, you could ask it about anything. It would be like you could say, “How was your day?” And anything you asked, because this feature was activated, it would connect to the Golden Gate Bridge. So it would say, “I’m feeling relaxed and expansive, much like the arches of the Golden Gate Bridge,” or-
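The “turn that way up” intervention can be sketched abstractly as adding a scaled feature direction to a hidden-state vector at one layer. The tiny vectors below are made-up stand-ins; in the real experiment the direction comes from a trained sparse autoencoder over a model with thousands of dimensions:

```python
# Minimal sketch of activation steering: hidden' = hidden + s * v,
# where v is a feature direction (here, a made-up stand-in for the
# "Golden Gate Bridge" direction) and s is the amplification.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (dot(u, u) ** 0.5 * dot(v, v) ** 0.5)

def steer(hidden, direction, strength):
    """Add a scaled feature direction to one hidden state."""
    return [h + strength * d for h, d in zip(hidden, direction)]

hidden = [0.2, -1.0, 0.5, 0.3]   # a hidden state (invented)
feature = [0.0, 0.7, 0.0, 0.7]   # feature direction (invented)

steered = steer(hidden, feature, strength=5.0)

# After steering, the hidden state aligns strongly with the feature,
# so downstream computation is pulled toward that concept.
print(round(cosine(hidden, feature), 3), round(cosine(steered, feature), 3))
```

The point of the toy is only the geometry: once one direction dominates the activation, everything the model computes downstream is colored by that concept, which is why every answer drifted back to the bridge.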
Lex Fridman
(00:25:31)
It would masterfully change topic to the Golden Gate Bridge and integrate it. There was also a sadness to the focus it had on the Golden Gate Bridge. I think people quickly fell in love with it. So people already miss it, because it was taken down, I think after a day.
Dario Amodei
(00:25:45)
Somehow these interventions on the model, where you kind of adjust its behavior, somehow emotionally made it seem more human than any other version of the model.
Lex Fridman
(00:25:56)
It’s a strong personality, strong identity.
Dario Amodei
(00:25:58)
It has a strong personality. It has these kind of obsessive interests. We can all think of someone who’s obsessed with something. So it does make it feel somehow a bit more human.

Claude

Lex Fridman
(00:26:08)
Let’s talk about the present. Let’s talk about Claude. So this year, a lot has happened. In March, Claude 3 Opus, Sonnet, and Haiku were released. Then Claude 3.5 Sonnet in July, with an updated version just now released. And then also Claude 3.5 Haiku was released. Okay. Can you explain the difference between Opus, Sonnet and Haiku, and how we should think about the different versions?
Dario Amodei
(00:26:34)
Yeah, so let’s go back to March when we first released these three models. So our thinking was, different companies produce large and small models, better and worse models. We felt that there was demand both for a really powerful model, one that might be a little bit slower that you’d have to pay more for, and also for fast, cheap models that are as smart as they can be for how fast and cheap. Whenever you want to do some kind of difficult analysis, like if I want to write code, for instance, or I want to brainstorm ideas or I want to do creative writing, I want the really powerful model.

(00:27:15)
But then there’s a lot of practical applications in a business sense where it’s like, I’m interacting with a website, I am doing my taxes, or I’m talking to a legal advisor and I want to analyze a contract. Or we have plenty of companies that are just like, I want to do auto-complete on my IDE or something. And for all of those things, you want to act fast and you want to use the model very broadly. So we wanted to serve that whole spectrum of needs. So we ended up with this kind of poetry theme. And so what’s a really short poem? It’s a haiku. Haiku is the small, fast, cheap model that was, at the time, really surprisingly intelligent for how fast and cheap it was.

(00:28:03)
Sonnet is a medium-sized poem, write a couple paragraphs. And so Sonnet was the middle model. It is smarter but also a little bit slower, a little bit more expensive. And Opus, like a Magnum Opus is a large work, Opus was the largest, smartest model at the time. So that was the original kind of thinking behind it.

(00:28:24)
And our thinking then was, “Well, each new generation of models should shift that trade-off curve.” So when we released Sonnet 3.5, it has roughly the same cost and speed as the Sonnet 3 model, but it increased its intelligence to the point where it was smarter than the original Opus 3 model. Especially for code, but also just in general. And so now we’ve shown results for Haiku 3.5. And I believe Haiku 3.5, the smallest new model, is about as good as Opus 3, the largest old model. So basically the aim here is to shift the curve, and then at some point there’s going to be an Opus 3.5.

(00:29:13)
Now every new generation of models has its own thing. They use new data, their personality changes in ways that we try to steer but are not fully able to steer. And so there’s never quite that exact equivalence, where the only thing you’re changing is intelligence. We always try and improve other things and some things change without us knowing or measuring. So it’s very much an inexact science. In many ways, the manner and personality of these models is more an art than it is a science.

Opus 3.5

Lex Fridman
(00:29:44)
So what is the reason for the span of time between say, Claude Opus 3.0 and 3.5? What takes that time, if you can speak to it?
Dario Amodei
(00:29:58)
Yeah, so there’s different processes. There’s pre-training, which is just kind of the normal language model training. And that takes a very long time. That uses, these days, tens of thousands, sometimes many tens of thousands of GPUs or TPUs or training them, or we use different platforms, but accelerator chips, often training for months.

(00:30:26)
There’s then a kind of post-training phase where we do reinforcement learning from human feedback as well as other kinds of reinforcement learning. That phase is getting larger and larger now, and often that’s less of an exact science. It often takes effort to get it right. Models are then tested with some of our early partners to see how good they are, and they’re then tested, both internally and externally, for their safety, particularly for catastrophic and autonomy risks. So we do internal testing according to our responsible scaling policy, which I could talk more about that in detail.

(00:31:06)
And then we have an agreement with the US and the UK AI Safety Institute, as well as other third-party testers in specific domains, to test the models for what are called CBRN risks, chemical, biological, radiological, and nuclear. We don’t think that models pose these risks seriously yet, but every new model we want to evaluate to see if we’re starting to get close to some of these more dangerous capabilities. So those are the phases, and then it just takes some time to get the model working in terms of inference and launching it in the API. So there’s just a lot of steps to actually making a model work. And of course, we’re always trying to make the processes as streamlined as possible.

(00:31:55)
We want our safety testing to be rigorous, but we want it to be rigorous and to be automatic, to happen as fast as it can, without compromising on rigor. Same with our pre-training process and our post-training process. So it’s just like building anything else. It’s just like building airplanes. You want to make them safe, but you want to make the process streamlined. And I think the creative tension between those is an important thing in making the models work.
Lex Fridman
(00:32:20)
Yeah, rumor on the street, I forget who was saying that, is that Anthropic has really good tooling. So probably a lot of the challenge here, on the software engineering side, is to build the tooling to have an efficient, low-friction interaction with the infrastructure.
Dario Amodei
(00:32:36)
You would be surprised how much of the challenge of building these models comes down to software engineering, performance engineering. From the outside, you might think, “Oh man, we had this Eureka breakthrough.” You know, like in a movie about science: “We discovered it, we figured it out.” But I think all things, even incredible discoveries, almost always come down to the details. And often super, super boring details. I can’t speak to whether we have better tooling than other companies. I mean, I haven’t been at those other companies, at least not recently, but it’s certainly something we give a lot of attention to.
Lex Fridman
(00:33:18)
I don’t know if you can say, but from Claude 3 to Claude 3.5, is there any extra pre-training going on, or is it mostly focused on the post-training? There’s been leaps in performance.
Dario Amodei
(00:33:29)
Yeah, I think at any given stage, we’re focused on improving everything at once. Just naturally. Like, there are different teams. Each team makes progress in a particular area, in making their particular segment of the relay race better. And it’s just natural that when we make a new model, we put all of these things in at once.
Lex Fridman
(00:33:50)
So the data you have, the preference data you get from RLHF, are there ways to apply it to newer models as they get trained up?
Dario Amodei
(00:34:00)
Yeah. Preference data from old models sometimes gets used for new models, although of course it performs somewhat better when it’s trained on the new models. Note that we have this constitutional AI method such that we don’t only use preference data, there’s also a post-training process where we train the model against itself. And there’s new types of post-training the model against itself that are used every day. So it’s not just RLHF, a bunch of other methods as well. Post-training, I think, is becoming more and more sophisticated.
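The “train the model against itself” idea behind constitutional AI can be caricatured in a few lines: a draft is critiqued against a written principle and revised, yielding a (draft, revision) training pair with no human preference label. The model, principle, and word list below are all invented stand-ins, not Anthropic’s actual implementation:

```python
# Toy sketch of a critique-and-revise loop. A real constitutional AI
# pipeline uses the language model itself as both critic and reviser;
# here the "model", principle, and checks are trivial placeholders.

PRINCIPLE = "Responses should not contain insults."
INSULTS = {"stupid", "idiot"}  # invented stand-in word list

def draft_response(prompt: str) -> str:
    """Placeholder for a model's first draft."""
    return "That is a stupid question, but the answer is 42."

def critique(text: str) -> list:
    """Return words that violate the principle."""
    words = text.lower().replace(",", "").replace(".", "").split()
    return [w for w in words if w in INSULTS]

def revise(text: str, violations: list) -> str:
    """Remove the violating words flagged by the critique."""
    for w in violations:
        text = text.replace(w + " ", "")
    return text

prompt = "What is the answer?"
first = draft_response(prompt)
better = revise(first, critique(first))

# The (first, better) pair can now serve as a training example,
# produced without any human preference label.
assert critique(better) == []
print(better)
```

The structural point is that the supervision signal comes from the written principles plus the model’s own critiques, which is why it supplements rather than replaces RLHF preference data.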

Sonnet 3.5

Lex Fridman
(00:34:30)
Well, what explains the big leap in performance for the new Sonnet 3.5, I mean, at least on the programming side? And maybe this is a good place to talk about benchmarks. What does it mean to get better? Just the number went up, but I program, and I also love programming, and Claude 3.5 through Cursor is what I use to assist me in programming. And there was, at least experientially, anecdotally, a sense that it’s gotten smarter at programming. So what does it take to get it smarter?
Dario Amodei
(00:34:30)
We-
Lex Fridman
(00:35:00)
So what does it take to get it smarter?
Dario Amodei
(00:35:03)
We observe that as well. By the way, there were a couple of very strong engineers here at Anthropic for whom all previous code models, both produced by us and produced by all the other companies, hadn’t really been useful. They said, “Maybe this is useful to a beginner. It’s not useful to me.” But Sonnet 3.5, the original one, for the first time, they said, “Oh, my God, this helped me with something that it would’ve taken me hours to do. This is the first model that’s actually saved me time.”

(00:35:31)
So again, the water line is rising. And then I think the new Sonnet has been even better. In terms of what it takes, I’ll just say it’s been across the board. It’s in the pre-training, it’s in the post-training, it’s in various evaluations that we do. We’ve observed this as well. And if we go into the details of the benchmark, so SWE-bench is basically… Since you’re a programmer, you’ll be familiar with pull requests, and just pull requests, they’re like a sort of atomic unit of work. You could say I’m implementing one thing.

(00:36:12)
So SWE-bench actually gives you a real world situation where the code base is in a current state and I’m trying to implement something that’s described in language. We have internal benchmarks where we measure the same thing and you say, “Just give the model free rein to do anything, run anything, edit anything. How well is it able to complete these tasks?” And it’s that benchmark that’s gone from “it can do it 3% of the time” to “it can do it about 50% of the time.”

(00:36:43)
So I actually do believe that you can game benchmarks, but I think if we get to 100% on that benchmark in a way that isn’t over-trained or gamed for that particular benchmark, it probably represents a real and serious increase in programming ability. And I would suspect that if we can get to 90, 95%, that it will represent the ability to autonomously do a significant fraction of software engineering tasks.
Lex Fridman
(00:37:13)
Well, ridiculous timeline question. When is Claude Opus 3.5 coming out?
Dario Amodei
(00:37:19)
Not giving you an exact date, but as far as we know, the plan is still to have a Claude 3.5 Opus.
Lex Fridman
(00:37:28)
Are we going to get it before GTA 6 or no?
Dario Amodei
(00:37:30)
Like Duke Nukem Forever?
Lex Fridman
(00:37:30)
Duke Nukem. Right.
Dario Amodei
(00:37:32)
What was that game? There was some game that was delayed 15 years.
Lex Fridman
(00:37:32)
That’s right.
Dario Amodei
(00:37:34)
Was that Duke Nukem Forever?
Lex Fridman
(00:37:36)
Yeah. And I think GTA is now just releasing trailers.
Dario Amodei
(00:37:39)
It’s only been three months since we released the first Sonnet.
Lex Fridman
(00:37:42)
Yeah, it’s the incredible pace of release.
Dario Amodei
(00:37:45)
It just tells you about the pace, the expectations for when things are going to come out.

Claude 4.0

Lex Fridman
(00:37:49)
So what about 4.0? So how do you think, as these models get bigger and bigger, about versioning, and also just versioning in general? Why is Sonnet 3.5 updated with the date? Why not Sonnet 3.6, which is what a lot of people are calling it?
Dario Amodei
(00:38:06)
Naming is actually an interesting challenge here, right? Because I think a year ago, most of the model was pre-training. And so you could start from the beginning and just say, “Okay, we’re going to have models of different sizes. We’re going to train them all together and we’ll have a family of naming schemes and then we’ll put some new magic into them and then we’ll have the next generation.”

(00:38:26)
The trouble starts already when some of them take a lot longer than others to train. That already messes up your timing a little bit. And as you make big improvements in pre-training, you suddenly notice, “Oh, I can make a better pre-trained model,” and that doesn’t take very long to do, but it clearly has the same size and shape as previous models. So between those two things and the timing issues, any kind of scheme you come up with, the reality tends to frustrate that scheme, right? It tends to break out of the scheme.

(00:39:04)
It’s not like software where you can say, “Oh, this is 3.7, this is 3.8.” No, you have models with different trade-offs. You can change some things in your models, you can change other things. Some are faster and slower at inference. Some have to be more expensive, some have to be less expensive. And so I think all the companies have struggled with this. I think we were in a good position in terms of naming when we had Haiku, Sonnet and Opus.
Lex Fridman
(00:39:31)
It was great, great start.
Dario Amodei
(00:39:32)
We’re trying to maintain it, but it’s not perfect, so we’ll try and get back to the simplicity. But just the nature of the field, I feel like no one’s figured out naming. It’s somehow a different paradigm from normal software and so none of the companies have been perfect at it. It’s something we struggle with surprisingly much relative to how trivial it is for the grand science of training the models.
Lex Fridman
(00:40:03)
So from the user side, the user experience of the updated Sonnet 3.5 is just different than the previous June 2024 Sonnet 3.5. It would be nice to come up with some kind of labeling that embodies that. Because people talk about Sonnet 3.5, but now there’s a different one, and so how do you refer to the previous one and the new one when there’s a distinct improvement? It just makes conversation about it challenging.
Dario Amodei
(00:40:34)
Yeah, yeah. I definitely think this question of there are lots of properties of the models that are not reflected in the benchmarks. I think that’s definitely the case and everyone agrees. And not all of them are capabilities. Models can be polite or brusque, they can be very reactive or they can ask you questions. They can have what feels like a warm personality or a cold personality. They can be boring or they can be very distinctive like Golden Gate Claude was.

(00:41:10)
And we have a whole team focused on, I think we call it Claude character. Amanda leads that team and we’ll talk to you about that, but it’s still a very inexact science and often we find that models have properties that we’re not aware of. The fact of the matter is that you can talk to a model 10,000 times and there are some behaviors you might not see just like with a human, right?

(00:41:36)
I can know someone for a few months and not know that they have a certain skill or not know that there’s a certain side to them. And so I think we just have to get used to this idea. And we’re always looking for better ways of testing our models to demonstrate these capabilities and also to decide which are the personality properties we want models to have and which we don’t want to have. That itself, the normative question, is also super interesting.

Criticism of Claude

Lex Fridman
(00:42:02)
I got to ask you a question from Reddit.
Dario Amodei
(00:42:04)
From Reddit? Oh, boy.
Lex Fridman
(00:42:07)
There’s just this fascinating, to me at least, psychological and social phenomenon where people report that Claude has gotten dumber for them over time. And so the question is, does the user complaint about the dumbing down of Claude 3.5 Sonnet hold any water? Are these anecdotal reports a kind of social phenomenon, or are there any cases where Claude would get dumber?
Dario Amodei
(00:42:33)
So this actually doesn’t apply just to Claude. I believe I’ve seen these complaints for every foundation model produced by a major company. People said this about GPT-4, they said it about GPT-4 Turbo. So a couple things. One, the actual weights of the model, the actual brain of the model, that does not change unless we introduce a new model. There are just a number of reasons why it would not make sense practically to be randomly substituting in new versions of the model.

(00:43:09)
It’s difficult from an inference perspective, and it’s actually hard to control all the consequences of changing the weights of the model. Let’s say you wanted to fine-tune the model, I don’t know, to say “certainly” less, which an old version of Sonnet used to do. You actually end up changing 100 other things as well. So we have a whole process for modifying the model. We do a bunch of testing on it. We do a bunch of user testing with early customers.

(00:43:36)
So we have basically never changed the weights of the model without telling anyone. And certainly, in the current setup, it would not make sense to do that. Now, there are a couple things that we do occasionally do. One is sometimes we run A/B tests, but those are typically very close to when a model is being released and for a very small fraction of time.

(00:44:01)
So the day before the new Sonnet 3.5 came out, and I agree we should have had a better name, it’s clunky to refer to it, there were some comments from people that it had gotten a lot better, and that’s because a fraction of users were exposed to an A/B test for those one or two days. The other is that occasionally the system prompt will change. The system prompt can have some effects, although it’s unlikely to actually make the models dumber.

(00:44:32)
And we’ve seen that while these two things, which I’m listing to be very complete, happen quite infrequently, the complaints, for us and for other model companies, that the model changed, the model isn’t good at this, the model got more censored, the model was dumbed down, those complaints are constant. And so I don’t want to say people are imagining it or anything, but the models are, for the most part, not changing. If I were to offer a theory, I think it actually relates to one of the things I said before, which is that models are very complex and have many aspects to them. And so often, if I ask the model a question, if I’m like, “Do task X” versus, “Can you do task X?” the model might respond in different ways. And so there are all kinds of subtle things that you can change about the way you interact with the model that can give you very different results.

(00:45:33)
To be clear, this itself is like a failing by us and by the other model providers that the models are just often sensitive to small changes in wording. It’s yet another way in which the science of how these models work is very poorly developed. And so if I go to sleep one night and I was talking to the model in a certain way and I slightly changed the phrasing of how I talk to the model, I could get different results.

(00:45:58)
So that’s one possible way. The other thing is, man, it’s just hard to quantify this stuff. It’s hard to quantify this stuff. I think people are very excited by new models when they come out and then as time goes on, they become very aware of their limitations. So that may be another effect, but that’s all a very long-winded way of saying for the most part, with some fairly narrow exceptions, the models are not changing.
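The “Do task X” versus “Can you do task X?” sensitivity can be illustrated with a toy stub. Here `toy_model` is entirely hypothetical; it exaggerates an effect real models show much more subtly:

```python
# Toy illustration of prompt sensitivity. The "model" is a stub that answers
# a capability question literally but performs an imperative request.
# toy_model is purely hypothetical, not a real API.

def toy_model(prompt: str) -> str:
    if prompt.lower().startswith("can you"):
        return "Yes, I can do that."       # answers the question literally
    return "Here is the result: ..."       # actually performs the task

phrasings = [
    "Can you summarize this article?",
    "Summarize this article.",
]

for p in phrasings:
    print(p, "->", toy_model(p))
```

To measure this at any scale you would run many paraphrases of the same task and compare the outcome distributions, rather than eyeballing single responses.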
Lex Fridman
(00:46:22)
I think there is a psychological effect. You just start getting used to it; the baseline rises. When people first got Wi-Fi on airplanes, it was amazing, magic.
Dario Amodei
(00:46:32)
It’s amazing. Yeah.
Lex Fridman
(00:46:32)
And then you start-
Dario Amodei
(00:46:33)
And now I’m like, “I can’t get this thing to work. This is such a piece of crap.”
Lex Fridman
(00:46:36)
Exactly. So it’s easy to have the conspiracy theory of, “They’re making the Wi-Fi slower and slower.” This is probably something I’ll talk to Amanda much more about, but another Reddit question: “When will Claude stop trying to be my puritanical grandmother, imposing its moral worldview on me as a paying customer? And also, what is the psychology behind making Claude overly apologetic?” So these reports are about the experience, a different angle on the frustration. It has to do with the character [inaudible 00:47:06].
Dario Amodei
(00:47:06)
Yeah, so a couple points on this first. One is things that people say on Reddit and Twitter or X or whatever it is, there’s actually a huge distribution shift between the stuff that people complain loudly about on social media and what actually statistically users care about and that drives people to use the models.

(00:47:27)
People are frustrated with things like the model not writing out all the code, or the model just not being as good at code as it could be, even though it’s the best model in the world on code. I think the majority of things are about that, but certainly a vocal minority raise these concerns: they’re frustrated by the model refusing things it shouldn’t refuse, or apologizing too much, or just having these annoying verbal tics.

(00:47:59)
The second caveat, and I just want to say this super clearly because I think some people don’t know it, others know it, but forget it. It is very difficult to control across the board how the models behave. You cannot just reach in there and say, “Oh, I want the model to apologize less.” You can do that. You can include training data that says, “Oh, the model should apologize less.” But then in some other situation, they end up being super rude or overconfident in a way that’s misleading people.

(00:48:30)
So there are all these trade-offs. For example, another thing is, there was a period during which models, ours and I think others’ as well, were too verbose; they would repeat themselves, they would say too much. You can cut down on the verbosity by penalizing the models for just talking for too long. What happens when you do that, if you do it in a crude way, is that when the models are coding, sometimes they’ll say, “Rest of the code goes here,” right?

(00:48:58)
Because they’ve learned that that’s the way to economize and that they see it. And so that leads the model to be so-called lazy in coding, where they’re just like, “Ah, you can finish the rest of it.” It’s not because we want to save on compute or because the models are lazy during winter break or any of the other conspiracy theories that have come up. Actually, it’s just very hard to control the behavior of the model, to steer the behavior of the model in all circumstances at once.
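The lazy-coding failure mode can be sketched with a toy reward: under a crude length penalty, an elided answer (“rest of the code goes here”) can outscore a complete one. Every number here is invented purely for illustration:

```python
# Toy illustration of how a crude length penalty rewards elision.
# quality scores, token counts, and the penalty weight are all made up.

def reward(quality: float, num_tokens: int, length_penalty: float = 0.01) -> float:
    """Crude training signal: task quality minus a per-token penalty."""
    return quality - length_penalty * num_tokens

full_solution   = reward(quality=1.0, num_tokens=400)  # writes all the code
elided_solution = reward(quality=0.8, num_tokens=60)   # "rest of the code goes here"

print(full_solution, elided_solution)  # the elided answer scores higher
```

This is the crude version; the point is that the penalty has to be shaped carefully, or the model learns to economize in exactly the places you don’t want.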

(00:49:28)
There’s this whack- a-mole aspect where you push on one thing and these other things start to move as well that you may not even notice or measure. And so one of the reasons that I care so much about grand alignment of these AI systems in the future is actually, these systems are actually quite unpredictable. They’re actually quite hard to steer and control. And this version we’re seeing today of you make one thing better, it makes another thing worse, I think that’s like a present day analog of future control problems in AI systems that we can start to study today.

(00:50:12)
I think that difficulty in steering the behavior and making sure that if we push an AI system in one direction, it doesn’t push it in another direction in some other ways that we didn’t want. I think that’s an early sign of things to come, and if we can do a good job of solving this problem of you ask the model to make and distribute smallpox and it says no, but it’s willing to help you in your graduate level virology class, how do we get both of those things at once? It’s hard.

(00:50:48)
It’s very easy to go to one side or the other and it’s a multidimensional problem. And so I think these questions of shaping the model’s personality, I think they’re very hard. I think we haven’t done perfectly on them. I think we’ve actually done the best of all the AI companies, but still so far from perfect.

(00:51:08)
And I think if we can get this right, if we can control the false positives and false negatives in this very controlled present day environment, we’ll be much better at doing it for the future when our worry is: will the models be super autonomous? Will they be able to make very dangerous things? Will they be able to autonomously build whole companies and are those companies aligned? So I think of this present task as both vexing but also good practice for the future.
Lex Fridman
(00:51:40)
What’s the current best way of gathering user feedback? Not anecdotal data, but just large-scale data about pain points or the opposite of pain points, positive things, so on? Is it internal testing? Is it a specific group testing, A/B testing? What works?
Dario Amodei
(00:51:59)
So typically, we’ll have internal model bashings where all of Anthropic… Anthropic is almost 1,000 people. People just try and break the model. They try and interact with it in various ways. We have a suite of evals for, “Oh, is the model refusing in ways that it shouldn’t?” I think we even had a “certainly” eval, because again, at one point, the model had this problem where it had this annoying tic where it would respond to a wide range of questions by saying, “Certainly, I can help you with that. Certainly, I would be happy to do that. Certainly, this is correct.”

(00:52:34)
And so we had a “certainly” eval, which is: how often does the model say certainly? But look, this is just whack-a-mole. What if it switches from “certainly” to “definitely”? So every time, we add a new eval, and we’re always evaluating for all the old things; we have hundreds of these evaluations, but we find that there’s no substitute for a human interacting with it.
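A minimal sketch of what a phrase-frequency eval like the “certainly” eval might look like; the sample responses are invented:

```python
# Sketch of a verbal-tic eval: what fraction of responses contain a phrase?
# The responses below are invented examples, not real model outputs.

def tic_rate(responses: list[str], tic: str = "certainly") -> float:
    """Fraction of responses containing the target phrase (case-insensitive)."""
    if not responses:
        return 0.0
    return sum(tic in r.lower() for r in responses) / len(responses)

responses = [
    "Certainly, I can help you with that.",
    "Here is the function you asked for.",
    "Certainly, this is correct.",
    "The bug is on line 12.",
]

print(tic_rate(responses))                     # 0.5
print(tic_rate(responses, tic="definitely"))   # 0.0 -- whack-a-mole: a new tic needs a new eval
```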

(00:52:56)
And so it’s very much like the ordinary product development process. We have hundreds of people within Anthropic bash the model. Then we do external A/B tests. Sometimes we’ll run tests with contractors. We pay contractors to interact with the model. So you put all of these things together and it’s still not perfect. You still see behaviors that you don’t quite want to see. You still see the model refusing things that it just doesn’t make sense to refuse.

(00:53:25)
But I think trying to solve this challenge, trying to stop the model from doing genuinely bad things that everyone agrees it shouldn’t do, everyone agrees that the model shouldn’t talk about, I don’t know, child abuse material. Everyone agrees the model shouldn’t do that, but at the same time, that it doesn’t refuse in these dumb and stupid ways.

(00:53:49)
I think drawing that line as finely as possible, approaching perfectly, is still a challenge and we’re getting better at it every day, but there’s a lot to be solved. And again, I would point to that as an indicator of a challenge ahead in terms of steering much more powerful models.
Lex Fridman
(00:54:06)
Do you think Claude 4.0 is ever coming out?
Dario Amodei
(00:54:11)
I don’t want to commit to any naming scheme because if I say here, “We’re going to have Claude 4 next year,” and then we decide that we should start over because there’s a new type of model, I don’t want to commit to it. I would expect in a normal course of business that Claude 4 would come after Claude 3.5, but you never know in this wacky field.
Lex Fridman
(00:54:34)
But this idea of scaling is continuing.
Dario Amodei
(00:54:38)
Scaling is continuing. There will definitely be more powerful models coming from us than the models that exist today. That is certain. Or if there aren’t, we’ve deeply failed as a company.

AI Safety Levels

Lex Fridman
(00:54:49)
Okay. Can you explain the responsible scaling policy and the AI safety level standards, ASL levels?
Dario Amodei
(00:54:55)
As much as I am excited about the benefits of these models, and we’ll talk about that if we talk about Machines of Loving Grace, I’m worried about the risks and I continue to be worried about the risks. No one should think that Machines of Loving Grace was me saying I’m no longer worried about the risks of these models. I think they’re two sides of the same coin.

(00:55:16)
The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance and peace, large parts of the economy, those come with risks as well, right? With great power comes great responsibility. The two are paired. Things that are powerful can do good things and they can do bad things. I think of those risks as being in several different categories, perhaps the two biggest risks that I think about. And that’s not to say that there aren’t risks today that are important, but when I think of really the things that would happen on the grandest scale, one is what I call catastrophic misuse.

(00:55:59)
These are misuses of the models in domains like cyber, bio, radiological, nuclear, things that could harm or even kill thousands, even millions of people if they really, really go wrong. These are the number one priority to prevent. And here I would just make a simple observation, which is that, if I look today at people who have done really bad things in the world, I think actually humanity has been protected by the fact that the overlap between really smart, well-educated people and people who want to do really horrific things has generally been small.

(00:56:44)
Let’s say I’m someone who has a PhD in this field and a well-paying job. There’s so much to lose. Even assuming I’m completely evil, which most people are not, why would such a person risk their life, risk their legacy, their reputation to do something truly, truly evil? If we had a lot more people like that, the world would be a much more dangerous place. And so my worry is that by being a much more intelligent agent, AI could break that correlation.

(00:57:21)
And so I do have serious worries about that. I believe we can prevent those worries. But I think as a counterpoint to Machines of Loving Grace, I want to say that there’s still serious risks. And the second range of risks would be the autonomy risks, which is the idea that models might, on their own, particularly as we give them more agency than they’ve had in the past, particularly as we give them supervision over wider tasks like writing whole code bases or someday even effectively operating entire companies, they’re on a long enough leash. Are they doing what we really want them to do?

(00:58:00)
It’s very difficult to even understand in detail what they’re doing, let alone control it. And like I said, these early signs that it’s hard to perfectly draw the boundary between things the model should do and things the model shouldn’t do that if you go to one side, you get things that are annoying and useless and you go to the other side, you get other behaviors. If you fix one thing, it creates other problems.

(00:58:25)
We’re getting better and better at solving this. I don’t think this is an unsolvable problem. I think this is a science like the safety of airplanes or the safety of cars or the safety of drugs. I don’t think there’s any big thing we’re missing. I just think we need to get better at controlling these models. And so these are the two risks I’m worried about. And our responsible scaling plan, which I’ll recognize is a very long-winded answer to your question.
Lex Fridman
(00:58:49)
I love it. I love it.
Dario Amodei
(00:58:51)
Our responsible scaling plan is designed to address these two types of risks. And so every time we develop a new model, we basically test it for its ability to do both of these bad things. So if I were to back up a little bit, I think we have an interesting dilemma with AI systems where they’re not yet powerful enough to present these catastrophes. I don’t know if they’ll ever present these catastrophes. It’s possible they won’t.

(00:59:22)
But the case for worry, the case for risk is strong enough that we should act now and they’re getting better very, very fast. I testified in the Senate that we might have serious bio risks within two to three years. That was about a year ago. Things have proceeded apace. So we have this thing where it’s surprisingly hard to address these risks because they’re not here today, they don’t exist. They’re like ghosts, but they’re coming at us so fast because the models are improving so fast.

(00:59:56)
So how do you deal with something that’s not here today, doesn’t exist, but is coming at us very fast? The solution we came up with for that, in collaboration with people like the organization METR and Paul Christiano, is that you need tests to tell you when the risk is getting close. You need an early warning system. And so every time we have a new model, we test it for its capability to do these CBRN tasks, as well as testing it for how capable it is of doing tasks autonomously on its own.

(01:00:35)
And in the latest version of our RSP, which we released in the last month or two, the way we test autonomy risks is the AI model’s ability to do aspects of AI research itself, because when the AI models can do AI research, they become truly, truly autonomous. And that threshold is important in a bunch of other ways. And so what do we then do with these tests? The RSP basically develops what we’ve called an if-then structure, which is: if the models pass a certain capability threshold, then we impose a certain set of safety and security requirements on them.

(01:01:16)
So today’s models are what’s called ASL-2. ASL-1 is for systems that manifestly don’t pose any risk of autonomy or misuse. So for example, a chess-playing bot: Deep Blue would be ASL-1. It’s just manifestly the case that you can’t use Deep Blue for anything other than chess. It was just designed for chess. No one’s going to use it to conduct a masterful cyber attack or to run wild and take over the world.

(01:01:47)
ASL-2 is today’s AI systems where we’ve measured them and we think these systems are simply not smart enough to autonomously self-replicate or conduct a bunch of tasks and also not smart enough to provide meaningful information about CBRN risks and how to build CBRN weapons above and beyond what can be known from looking at Google. In fact, sometimes they do provide information above and beyond a search engine, but not in a way that can be stitched together, not in a way that end-to-end is dangerous enough.

(01:02:26)
So ASL-3 is going to be the point at which the models are helpful enough to enhance the capabilities of non-state actors, right? State actors can already do, unfortunately, to a high level of proficiency, a lot of these very dangerous and destructive things. The difference is that non-state actors are not capable of it. And so when we get to ASL-3, we’ll take special security precautions designed to be sufficient to prevent theft of the model by non-state actors and misuse of the model as it’s deployed. We’ll have to have enhanced filters targeted at these particular areas.
Lex Fridman
(01:03:07)
Cyber, bio, nuclear.
Dario Amodei
(01:03:09)
Cyber, bio, nuclear and model autonomy, which is less a misuse risk and more a risk of the model doing bad things itself. ASL-4 is getting to the point where these models could enhance the capability of an already knowledgeable state actor and/or become the main source of such a risk. If you wanted to engage in such a risk, the main way you would do it is through a model. And then I think ASL-4 on the autonomy side, it’s some amount of acceleration in AI research capabilities with an AI model.

(01:03:45)
And then ASL-5 is where we would get to models that are capable enough that they could exceed humanity in their ability to do any of these tasks. And so the point of the if-then structure commitment is basically to say, “Look, I don’t know, I’ve been working with these models for many years and I’ve been worried about risk for many years. It’s actually dangerous to cry wolf. It’s actually dangerous to say this model is risky, and people look at it and they say this is manifestly not dangerous.” Again, it’s the delicacy of: the risk isn’t here today, but it’s coming at us fast.

(01:04:27)
How do you deal with that? It’s really vexing to a risk planner to deal with it. And so this if-then structure basically says, “Look, we don’t want to antagonize a bunch of people, we don’t want to harm our own ability to have a place in the conversation by imposing these very onerous burdens on models that are not dangerous today.” So the if-then, the trigger commitment is basically a way to deal with this. It says you clamp down hard when you can show the model is dangerous.

(01:04:58)
And of course, what has to come with that is enough of a buffer threshold that you’re not at high risk of missing the danger. It’s not a perfect framework. We’ve had to change it. We came out with a new one just a few weeks ago and probably going forward, we might release new ones multiple times a year because it’s hard to get these policies right technically, organizationally from a research perspective. But that is the proposal, if-then commitments and triggers in order to minimize burdens and false alarms now, but really react appropriately when the dangers are here.
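The if-then structure with a buffer threshold can be sketched as a mapping from a measured capability score to the safety level whose requirements must be in place. The thresholds, the buffer, and the idea of a single scalar score are invented simplifications; the actual RSP evals are far richer:

```python
# Hedged sketch of an if-then trigger policy: if a capability eval crosses a
# threshold (minus a safety buffer), then the corresponding safety level's
# requirements apply. All thresholds and scores are illustrative inventions.

THRESHOLDS = [          # (required level, capability score that triggers it)
    ("ASL-4", 0.9),
    ("ASL-3", 0.6),
    ("ASL-2", 0.0),
]

def required_level(capability_score: float, buffer: float = 0.1) -> str:
    """Return the safety level to prepare, clamping down before the danger line."""
    for level, trigger in THRESHOLDS:
        if capability_score + buffer >= trigger:  # buffer: fire the trigger early
            return level
    return "ASL-1"

print(required_level(0.2))   # ASL-2
print(required_level(0.55))  # ASL-3 (0.55 + 0.1 crosses the 0.6 trigger)
print(required_level(0.85))  # ASL-4
```

The buffer is the key design choice: it trades false alarms now against the risk of missing the danger, which is exactly the tension described above.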

ASL-3 and ASL-4

Lex Fridman
(01:05:37)
What do you think the timeline for ASL-3 is where several of the triggers are fired? And what do you think the timeline is for ASL-4?
Dario Amodei
(01:05:44)
Yeah. So that is hotly debated within the company. We are working actively to prepare ASL-3 security measures as well as ASL-3 deployment measures. I’m not going to go into detail, but we’ve made a lot of progress on both and we’re prepared to be, I think, ready quite soon. I would not be surprised at all if we hit ASL-3 next year. There was some concern that we might even hit it this year. That’s still possible. That could still happen. It’s very hard to say, but I would be very, very surprised if it was 2030. I think it’s much sooner than that.
Lex Fridman
(01:06:24)
So there’s protocols for detecting it, the if-then and then there’s protocols for how to respond to it.
Dario Amodei
(01:06:31)
Yes.
Lex Fridman
(01:06:32)
How difficult is the second, the latter?
Dario Amodei
(01:06:34)
Yeah. I think for ASL-3, it’s primarily about security and about filters on the model relating to a very narrow set of areas when we deploy the model. Because at ASL-3, the model isn’t autonomous yet. And so you don’t have to worry about the model itself behaving in a bad way even when it’s deployed internally. So I think the ASL-3 measures are, I won’t say straightforward, they’re rigorous, but they’re easier to reason about.

(01:07:06)
I think once we get to ASL-4, we start to have worries about the models being smart enough that they might sandbag tests, they might not tell the truth about tests. We had some results come out about sleeper agents, and there was a more recent paper about whether the models can mislead attempts to test them, sandbag their own abilities, present themselves as being less capable than they are. And so I think with ASL-4, there’s going to be an important component of using other things than just interacting with the models.

(01:07:43)
For example, interpretability or hidden chains of thought where you have to look inside the model and verify via some other mechanism that is not as easily corrupted as what the model says, that the model indeed has some property. So we’re still working on ASL-4. One of the properties of the RSP is that we don’t specify ASL-4 until we’ve hit ASL-3. And I think that’s proven to be a wise decision because even with ASL-3, again, it’s hard to know this stuff in detail, and we want to take as much time as we can possibly take to get these things right.
Lex Fridman
(01:08:23)
So for ASL-3, the bad actor will be the humans.
Dario Amodei
(01:08:26)
Humans, yes.
Lex Fridman
(01:08:27)
And so there’s a little bit more…
Dario Amodei
(01:08:29)
For ASL-4, it’s both, I think.
Lex Fridman
(01:08:31)
It’s both. And so deception, and that’s where mechanistic interpretability comes into play, and hopefully the techniques used for that are not made accessible to the model.
Dario Amodei
(01:08:42)
Yeah. Of course, you can hook up the mechanistic interpretability to the model itself, but then you’ve lost it as a reliable indicator of the model state. There are a bunch of exotic ways you can think of that it might also not be reliable, like if the model gets smart enough that it can jump computers and read the code where you’re looking at its internal state. We’ve thought about some of those. I think they’re exotic enough. There are ways to render them unlikely. But yeah, generally, you want to preserve mechanistic interpretability as a verification set or test set that’s separate from the training process of the model.
Lex Fridman
(01:09:19)
See, I think as these models become better and better at conversation and become smarter, social engineering becomes a threat too, because they could start being very convincing to the engineers inside companies.
Dario Amodei
(01:09:30)
Oh, yeah. Yeah. We’ve seen lots of examples of demagoguery in our life from humans, and there’s a concern that models could do that as well.

Computer use

Lex Fridman
(01:09:40)
One of the ways that Claude has been getting more and more powerful is it’s now able to do some agentic stuff, computer use. There’s also an analysis within the sandbox of Claude.ai itself. But let’s talk about computer use. That seems to me super exciting, that you can just give Claude a task and it takes a bunch of actions, figures it out, and has access to your computer through screenshots. So can you explain how that works and where that’s headed?
Dario Amodei
(01:10:10)
Yeah. It’s actually relatively simple. So Claude has had for a long time, since Claude 3 back in March, the ability to analyze images and respond to them with text. The only new thing we added is that those images can be screenshots of a computer, and in response, we train the model to give a location on the screen where you can click and/or buttons on the keyboard you can press in order to take action. And it turns out that with actually not all that much additional training, the models can get quite good at that task. It’s a good example of generalization. People sometimes say if you get to low Earth orbit, you’re halfway to anywhere because of how much it takes to escape the gravity well. If you have a strong pre-trained model, I feel like you’re halfway to anywhere in terms of the intelligence space. And so actually, it didn’t take all that much to get Claude to do this. And you can just set that in a loop: give the model a screenshot, tell it what to click on, give it the next screenshot, tell it what to click on, and that turns into a full, almost video-like interaction with the model, and it’s able to do all of these tasks. We showed these demos where it’s able to fill out spreadsheets, it’s able to interact with a website, it’s able to open all kinds of programs on different operating systems, Windows, Linux, Mac. So I think all of that is very exciting. I will say, while in theory there’s nothing you could do there that you couldn’t have done through just giving the model the API to drive the computer screen, this really lowers the barrier. And there are a lot of folks who either aren’t in a position to interact with those APIs or it takes them a long time to do so.

(01:12:00)
It’s just that the screen is a universal interface that’s a lot easier to interact with. And so I expect over time, this is going to lower a bunch of barriers. Now, honestly, the current model leaves a lot still to be desired, and we were honest about that in the blog. It makes mistakes, it misclicks. We were careful to warn people, “Hey, you can’t just leave this thing to run on your computer for minutes and minutes. You got to give this thing boundaries and guardrails.” And I think that’s one of the reasons we released it first in an API form rather than just handing it to consumers and giving it control of their computer. But I definitely feel that it’s important to get these capabilities out there. As models get more powerful, we’re going to have to grapple with how do we use these capabilities safely. How do we prevent them from being abused?

(01:12:54)
And I think releasing the model while the capabilities are still limited is very helpful in terms of doing that. I think since it’s been released, a number of customers, Replit was maybe one of the quickest to deploy things, have made use of it in various ways. People have hooked up demos for Windows desktops, Macs, Linux machines. So yeah, it’s been very exciting. I think as with anything else, it comes with new exciting abilities, and then with those new exciting abilities, we have to think about how to make the model safe, reliable, do what humans want it to do. It’s the same story for everything. Same thing. It’s that same tension.
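The screenshot-in, action-out loop described above can be sketched in a few lines. Everything here is hypothetical: `model_choose_action`, the `Action` schema, and the scripted policy are illustrative stand-ins for the real vision model and OS automation layer, not the actual API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0         # screen coordinates for click actions
    y: int = 0
    text: str = ""     # keystrokes for "type" actions

def model_choose_action(screenshot: str, goal: str) -> Action:
    """Stand-in for the vision model: maps a screenshot to one action.
    Here it is a trivial scripted policy for illustration only."""
    if "login page" in screenshot:
        return Action("click", x=640, y=400)
    if "text field focused" in screenshot:
        return Action("type", text=goal)
    return Action("done")

def run_agent(goal: str, screens: list[str], max_steps: int = 10) -> list[Action]:
    """The loop: screenshot -> action -> next screenshot, with a hard step
    cap as a guardrail, since the model can misclick or loop forever."""
    trace = []
    for step, screenshot in enumerate(screens):
        if step >= max_steps:
            break
        action = model_choose_action(screenshot, goal)
        trace.append(action)
        if action.kind == "done":
            break
        # a real implementation would execute the action here and
        # capture a fresh screenshot for the next iteration
    return trace
```

A real deployment would replace `model_choose_action` with an API call and `screens` with live captures; the `max_steps` cap is the kind of boundary the blog post warns users to set.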
Lex Fridman
(01:13:51)
But the possibility of use cases here, just the range is incredible. So to make it work really well in the future, how much do you have to go beyond what the pre-trained model is doing? How much more post-training, RLHF or supervised fine-tuning or synthetic data, just for the agentic stuff?
Dario Amodei
(01:14:10)
Yeah. I think speaking at a high level, it’s our intention to keep investing a lot in making the model better. I think we look at some of the benchmarks where previous models were like, “Oh, could do it 6% of the time,” and now our model would do it 14 or 22% of the time. And yeah, we want to get up to the human level reliability of 80, 90% just like anywhere else. We’re on the same curve that we were on with SWE-bench where I think I would guess a year from now, the models can do this very, very reliably. But you got to start somewhere.
Lex Fridman
(01:14:41)
So you think it’s possible to get to the human-level 90% basically doing the same thing you’re doing now, or does it have to be special for computer use?
Dario Amodei
(01:14:49)
It depends on what you mean by special and general, but I generally think the same kinds of techniques that we’ve been using to train the current model, I expect that doubling down on those techniques, in the same way that we have for code, for models in general, for image input, for voice, I expect those same techniques will scale here as they have everywhere else.
Lex Fridman
(01:15:18)
But this is giving the power of action to Claude and so you could do a lot of really powerful things, but you could do a lot of damage also.
Dario Amodei
(01:15:27)
Yeah, yeah. No, and we’ve been very aware of that. Look, my view actually is computer use isn’t a fundamentally new capability like the CBRN or autonomy capabilities are. It’s more like it opens the aperture for the model to use and apply its existing abilities. And so the way we think about it, going back to our RSP, is nothing that this model is doing inherently increases the risk from an RSP perspective. But as the models get more powerful, having this capability may make it scarier: once a model has the cognitive capability to do something at the ASL-3 and ASL-4 level, this may be the thing that unbounds it from doing so. So going forward, certainly this modality of interaction is something we have tested for and that we will continue to test for in our RSP going forward. I think it’s probably better to learn and explore this capability before the model is super capable.
Lex Fridman
(01:16:33)
Yeah. And there’s a lot of interesting attacks like prompt injection because now you’ve widened the aperture so you can prompt inject through stuff on screen. So if this becomes more and more useful, then there’s more and more benefit to inject stuff into the model. If it goes to certain web page, it could be harmless stuff like advertisements or it could be harmful stuff, right?
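One common mitigation for the screen-injection problem Lex describes, purely as a sketch: treat on-screen text as untrusted data, wrap it in delimiters before it reaches the model, and flag anything that looks like an instruction. The function names and keyword heuristics below are illustrative assumptions, not a real defense.

```python
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you must now",
]

def wrap_untrusted(screen_text: str) -> str:
    """Mark OCR'd screen content as data, not instructions, so the
    prompt clearly separates trusted and untrusted channels."""
    return ("<untrusted_screen_content>\n"
            f"{screen_text}\n"
            "</untrusted_screen_content>")

def looks_injected(screen_text: str) -> bool:
    """Crude keyword heuristic; a real system would use a classifier
    and would still assume some injections slip through."""
    lowered = screen_text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

The delimiter trick only helps if the model is also trained to treat the delimited region as inert data, which is why injection remains an open problem rather than a solved one.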
Dario Amodei
(01:16:53)
Yeah, we’ve thought a lot about things like spam, CAPTCHA, mass… One secret, I’ll tell you, if you’ve invented a new technology, not necessarily the biggest misuse, but the first misuse you’ll see, scams, just petty scams.
Lex Fridman
(01:17:10)
Yeah.
Dario Amodei
(01:17:13)
People scamming each other, it’s a thing as old as time. And it’s just, every time, you’ve got to deal with it.
Lex Fridman
(01:17:21)
It’s almost silly to say, but it’s true, sort of bots and spam in general is a thing as it gets more and more intelligent-
Dario Amodei
(01:17:29)
Yeah, yeah.
Lex Fridman
(01:17:29)
… it’s harder and harder to fight it.
Dario Amodei
(01:17:32)
Like I said, there are a lot of petty criminals in the world and it’s like every new technology is a new way for petty criminals to do something stupid and malicious.
Lex Fridman
(01:17:45)
Is there any ideas about sandboxing it? How difficult is the sandboxing task?
Dario Amodei
(01:17:49)
Yeah, we sandbox during training. So for example, during training we didn’t expose the model to the internet. I think that’s probably a bad idea during training because the model can be changing its policy, it can be changing what it’s doing, and it’s having an effect in the real world. In terms of actually deploying the model, it kind of depends on the application. Sometimes you want the model to do something in the real world. But of course, you can always put guardrails on the outside. You can say, “Okay, well, this model’s not going to move any files from my computer or my web server to anywhere else.”

(01:18:27)
Now, when you talk about sandboxing, again, when we get to ASL-4, none of these precautions are going to make sense there. When you talk about ASL-4, there’s the theoretical worry that the model could be smart enough to break out of any box. And so there, we need to think about mechanistic interpretability. If we’re going to have a sandbox, it would need to be mathematically provable. That’s a whole different world than what we’re dealing with with the models today.
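The outer guardrail mentioned above ("this model’s not going to move any files off my machine") can be sketched as a policy check wrapping every proposed action. The action schema and the policy itself are assumptions for illustration; a real sandbox would enforce this at the OS or network level, not in application code.

```python
# destinations a file-move action is never allowed to reach
BLOCKED_DESTINATIONS = {"network", "external_drive"}

def allowed(action: dict) -> bool:
    """Deny any action that would move local files to an external
    destination; allow everything else."""
    if action.get("kind") != "move_file":
        return True
    return action.get("destination") not in BLOCKED_DESTINATIONS

def execute_with_guardrail(actions: list[dict]) -> list[dict]:
    """Filter a proposed action sequence, keeping only permitted ones.
    A real agent would check each action just before executing it."""
    return [a for a in actions if allowed(a)]
```

As the conversation notes, this kind of check is only meaningful while the model is not capable enough to route around it, which is exactly the ASL-4 worry.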
Lex Fridman
(01:19:01)
Yeah, the science of building a box from which an ASL-4 AI system cannot escape.
Dario Amodei
(01:19:08)
I think it’s probably not the right approach. I think the right approach, instead of having something unaligned that you’re trying to prevent from escaping, is to just design the model the right way, or have a loop where you look inside the model and you’re able to verify properties, and that gives you an opportunity to iterate and actually get it right. I think containing bad models is a much worse solution than having good models.

Government regulation of AI

Lex Fridman
(01:19:36)
Let me ask about regulation. What’s the role of regulation in keeping AI safe? So for example, can you describe California AI regulation bill SB 1047 that was ultimately vetoed by the governor? What are the pros and cons of this bill in general?
Dario Amodei
(01:19:50)
Yes, we ended up making some suggestions to the bill, and then some of those were adopted, and we felt, I think, quite positively about the bill by the end of that. It did still have some downsides. And of course, it got vetoed. I think at a high level, some of the key ideas behind the bill are, I would say, similar to ideas behind our RSPs. And I think it’s very important that some jurisdiction, whether it’s California or the federal government and/or other countries and other states, passes some regulation like this. And I can talk through why I think that’s so important. So I feel good about our RSP. It’s not perfect. It needs to be iterated on a lot. But it’s been a good forcing function for getting the company to take these risks seriously, to put them into product planning, to really make them a central part of work at Anthropic, and to make sure that all of a thousand people, and it’s almost a thousand people now at Anthropic, understand that this is one of the highest priorities of the company, if not the highest priority.

(01:20:58)
But one, there are still some companies that don’t have RSP-like mechanisms. OpenAI and Google did adopt these mechanisms a couple of months after Anthropic did, but there are other companies out there that don’t have these mechanisms at all. And so if some companies adopt these mechanisms and others don’t, it’s really going to create a situation where some of these dangers have the property that it doesn’t matter if three out of five of the companies are being safe; if the other two are being unsafe, it creates this negative externality. And I think the lack of uniformity is not fair to those of us who have put a lot of effort into being very thoughtful about these procedures. The second thing is I don’t think you can trust these companies to adhere to these voluntary plans on their own. Right? I like to think that Anthropic will. We do everything we can that we will; our RSP is checked by our long-term benefit trust, so we do everything we can to adhere to our own RSP.

(01:22:07)
But you hear lots of things about various companies saying, “Oh, they said they would give this much compute and they didn’t. They said they would do this thing and they didn’t.” I don’t think it makes sense to litigate particular things that companies have done, but I think this broad principle that if there’s nothing watching over them, if there’s nothing watching over us as an industry, there’s no guarantee that we’ll do the right thing, and the stakes are very high. And so I think it’s important to have a uniform standard that everyone follows and to make sure that simply the industry does what a majority of the industry has already said is important and has already said that they definitely will do.

(01:22:52)
Right. I think there’s a class of people who are against regulation on principle. I understand where that comes from. If you go to Europe and you see something like GDPR, you see some of the other stuff that they’ve done, some of it’s good, but some of it is really unnecessarily burdensome, and I think it’s fair to say it has really slowed innovation. And so I understand where people are coming from on priors. I understand why people start from that position. But again, I think AI is different. If we go to the very serious risks of autonomy and misuse that I talked about just a few minutes ago, I think that those are unusual and they warrant an unusually strong response. And so I think it’s very important.

(01:23:44)
Again, we need something that everyone can get behind. I think one of the issues with SB 1047, especially the original version of it, was that it had a bunch of the structure of RSPs, but it also had a bunch of stuff that was either clunky or that just would’ve created a bunch of burdens, a bunch of hassle, and might even have missed the target in terms of addressing the risks. You don’t really hear about that on Twitter; you just hear people cheering for any regulation. And then the folks who are against it make up these often quite intellectually dishonest arguments about how it’ll make us move away from California (the bill doesn’t apply if you’re headquartered in California; the bill only applies if you do business in California), or that it would damage the open source ecosystem, or that it would cause all of these things.

(01:24:43)
I think those were mostly nonsense, but there are better arguments against regulation. There’s one guy, Dean Ball, who’s really, I think, a very scholarly analyst, who looks at what happens when a regulation is put in place, the ways regulations can get a life of their own or how they can be poorly designed. And so our interest has always been: we do think there should be regulation in this space, but we want to be an actor who makes sure that that regulation is something that’s surgical, that’s targeted at the serious risks, and is something people can actually comply with. Because something I think the advocates of regulation don’t understand as well as they could is that if we get something in place that’s poorly targeted, that wastes a bunch of people’s time, what’s going to happen is people are going to say, “See, these safety risks, this is nonsense. I just had to hire 10 lawyers to fill out all these forms. I had to run all these tests for something that was clearly not dangerous.”

(01:25:51)
And after six months of that, there will be a groundswell, and we’ll end up with a durable consensus against regulation. And so I think the worst enemy of those who want real accountability is badly designed regulation. We need to actually get it right. And if there’s one thing I could say to the advocates, it would be that I want them to understand this dynamic better, and we need to be really careful, and we need to talk to people who actually have experience seeing how regulations play out in practice. And the people who have seen that understand that you have to be very careful. If this was some lesser issue, I might be against regulation at all.

(01:26:32)
But what I want the opponents to understand is that the underlying issues are actually serious. They’re not something that I or the other companies are just making up because of regulatory capture. They’re not sci-fi fantasies. They’re not any of these things. Every time we have a new model, every few months, we measure the behavior of these models, and they’re getting better and better at these concerning tasks, just as they are getting better and better at good, valuable, economically useful tasks. And so, I think SB 1047 was very polarizing, and I would just love it if some of the most reasonable opponents and some of the most reasonable proponents would sit down together. And I think that of the different AI companies, Anthropic was the only AI company that felt positively in a very detailed way. I think Elon tweeted briefly something positive, but some of the big ones like Google, OpenAI, Meta, Microsoft were pretty staunchly against.

(01:27:49)
So what I would really like is if some of the key stakeholders, some of the most thoughtful proponents and some of the most thoughtful opponents, would sit down and say, how do we solve this problem in a way that the proponents feel brings a real reduction in risk and that the opponents feel is not hampering the industry or hampering innovation any more than it needs to? I think for whatever reason, things got too polarized and those two groups didn’t get to sit down in the way that they should. And I feel urgency. I really think we need to do something in 2025. If we get to the end of 2025 and we’ve still done nothing about this, then I’m going to be worried. I’m not worried yet because, again, the risks aren’t here yet, but I think time is running short.
Lex Fridman
(01:28:44)
And come up with something surgical, like you said.
Dario Amodei
(01:28:46)
Yeah, yeah, yeah, exactly. And we need to get away from this intense pro safety versus intense anti-regulatory rhetoric. It’s turned into these flame wars on Twitter and nothing good’s going to come of that.
Lex Fridman
(01:29:04)
So there’s a lot of curiosity about the different players in the game. One of the OGs is OpenAI. You’ve had several years of experience at OpenAI. What’s your story and history there?
Dario Amodei
(01:29:14)
Yeah. So I was at OpenAI for roughly five years. For the last, I think it was a couple of years, I was vice president of research there. Probably myself and Ilya Sutskever were the ones who really set the research direction. Around 2016 or 2017, I first started to really believe in, or at least confirm my belief in, the scaling hypothesis when Ilya famously said to me, “The thing you need to understand about these models is they just want to learn. The models just want to learn.” And again, sometimes there are these one-sentence Zen koans that you hear and you’re like, “Ah, that explains everything. That explains a thousand things that I’ve seen.” And then ever after, I had this visualization in my head: you optimize the models in the right way, you point the models in the right way, they just want to learn. They just want to solve the problem, regardless of what the problem is.
Lex Fridman
(01:30:08)
So get out of their way, basically?
Dario Amodei
(01:30:10)
Get out of their way. Yeah.
Lex Fridman
(01:30:11)
Okay.
Dario Amodei
(01:30:11)
Don’t impose your own ideas about how they should learn. And this was the same thing as Rich Sutton put out in the bitter lesson, or Gwern put out in the scaling hypothesis. I think generally the dynamic was, I got this kind of inspiration from Ilya and from others, folks like Alec Radford, who did the original GPT-1, and then ran really hard with it, me and my collaborators, on GPT-2, GPT-3, RL from human feedback, which was an attempt to deal with early safety and steerability, things like debate and amplification, heavy on interpretability. So again, the combination of safety plus scaling. Probably 2018, 2019, 2020, those were the years when myself and my collaborators, probably many of whom became co-founders of Anthropic, really had a vision and drove the direction.
Lex Fridman
(01:31:11)
Why’d you leave? Why’d you decide to leave?
Dario Amodei
(01:31:13)
Yeah, so look, I’m going to put things this way, and I think it ties to the race to the top, which is: in my time at OpenAI, I’d come to appreciate the scaling hypothesis, and I’d come to appreciate the importance of safety along with the scaling hypothesis. The first one, I think OpenAI was getting on board with. The second one, in a way, had always been part of OpenAI’s messaging. But over the many years of the time that I spent there, I had a particular vision of how we should handle these things, how they should be brought out into the world, the kind of principles that the organization should have. And look, there were many, many discussions about, should the company do this, should the company do that? There’s a bunch of misinformation out there.

(01:32:07)
People say we left because we didn’t like the deal with Microsoft. False. Although there was a lot of discussion, a lot of questions about exactly how we do the deal with Microsoft. We left because we didn’t like commercialization. That’s not true. We built GPT-3, which was the model that was commercialized. I was involved in commercialization. It’s more, again, about how do you do it? Civilization is going down this path to very powerful AI. What’s the way to do it that is cautious, straightforward, honest, that builds trust in the organization and in individuals? How do we get from here to there, and how do we have a real vision for how to get it right? How can safety not just be something we say because it helps with recruiting? And I think at the end of the day, if you have a vision for that, forget about anyone else’s vision.

(01:33:01)
I don’t want to talk about anyone else’s vision. If you have a vision for how to do it, you should go off and you should do that vision. It is incredibly unproductive to try and argue with someone else’s vision. You might think they’re not doing it the right way. You might think they’re dishonest. Who knows? Maybe you’re right, maybe you’re not. But what you should do is you should take some people you trust and you should go off together and you should make your vision happen. And if your vision is compelling, if you can make it appeal to people, some combination of ethically and in the market, if you can make a company that’s a place people want to join, that engages in practices that people think are reasonable while managing to maintain its position in the ecosystem at the same time, if you do that, people will copy it.

(01:33:52)
And the fact that you are doing it, especially the fact that you’re doing it better than they are, causes them to change their behavior in a much more compelling way than if they’re your boss and you’re arguing with them. I don’t know how to be any more specific about it than that, but I think it’s generally very unproductive to try and get someone else’s vision to look like your vision. It’s much more productive to go off and do a clean experiment and say, “This is our vision, this is how we’re going to do things. Your choice is you can ignore us, you can reject what we’re doing or you can start to become more like us.” And imitation is the sincerest form of flattery. And that plays out in the behavior of customers, that plays out in the behavior of the public, that plays out in the behavior of where people choose to work. And again, at the end, it’s not about one company winning or another company winning.

(01:34:48)
If we or another company are engaging in some practice that people find genuinely appealing, and I want it to be in substance, not just in appearance, and I think researchers are sophisticated and they look at substance, and then other companies start copying that practice and they win because they copied that practice, that’s great. That’s success. That’s the race to the top. It doesn’t matter who wins in the end, as long as everyone is copying everyone else’s good practices. One way I think of it is, the thing we’re all afraid of is the race to the bottom, and in the race to the bottom, it doesn’t matter who wins, because we all lose. In the most extreme world, we make this autonomous AI and the robots enslave us, or whatever. That’s half joking, but that is the most extreme thing that could happen. Then it doesn’t matter which company was ahead. If instead you create a race to the top where people are competing to engage in good practices, then at the end of the day, it doesn’t matter who ends up winning; it doesn’t even matter who started the race to the top.

(01:35:57)
The point isn’t to be virtuous; the point is to get the system into a better equilibrium than it was before. And individual companies can play some role in doing this. Individual companies can help to start it, can help to accelerate it. And frankly, I think individuals at other companies have done this as well: the individuals who, when we put out an RSP, react by pushing harder to get something similar done at other companies. Sometimes other companies do something where we’re like, “Oh, it’s a good practice. We think that’s good. We should adopt it too.” The only difference is I think we try to be more forward-leaning. We try and adopt more of these practices first and adopt them more quickly when others invent them. But I think this dynamic is what we should be pointing at, and I think it abstracts away the question of which company’s winning, who trusts whom. I think all these questions of drama are profoundly uninteresting, and the thing that matters is the ecosystem that we all operate in and how to make that ecosystem better, because that constrains all the players.
Lex Fridman
(01:37:06)
And so Anthropic is this kind of clean experiment built on a foundation of what concretely AI safety should look like?
Dario Amodei
(01:37:13)
Well, look, I’m sure we’ve made plenty of mistakes along the way. The perfect organization doesn’t exist. It has to deal with the imperfection of a thousand employees. It has to deal with the imperfection of our leaders, including me. It has to deal with the imperfection of the people we’ve put to oversee the imperfection of the leaders like the board and the long-term benefit trust. It’s all a set of imperfect people trying to aim imperfectly at some ideal that will never perfectly be achieved. That’s what you sign up for. That’s what it will always be.

(01:37:45)
But imperfect doesn’t mean you just give up. There’s better and there’s worse. And hopefully, we can do well enough that we can begin to build some practices that the whole industry engages in. And then my guess is that multiple of these companies will be successful. Anthropic will be successful. These other companies, like ones I’ve been at the past, will also be successful. And some will be more successful than others. That’s less important than, again, that we align the incentives of the industry. And that happens partly through the race to the top, partly through things like RSP, partly through, again, selected surgical regulation.

Hiring a great team

Lex Fridman
(01:38:25)
You said talent density beats talent mass, so can you explain that? Can you expand on that?
Dario Amodei
(01:38:25)
Yeah.
Lex Fridman
(01:38:31)
Can you just talk about what it takes to build a great team of AI researchers and engineers?
Dario Amodei
(01:38:37)
This is one of these statements that’s more true every month. Every month I see this statement as more true than I did the month before. So if I were to do a thought experiment: let’s say you have a team of 100 people that are super smart, motivated, and aligned with the mission, and that’s your company. Or you can have a team of a thousand people where 200 people are super smart, super aligned with the mission, and then 800 people are, let’s just say you pick 800 random big tech employees. Which would you rather have? The talent mass is greater in the group of a thousand people. You have an even larger number of incredibly talented, incredibly aligned, incredibly smart people. But the issue is just that if every time someone super talented looks around, they see someone else super talented and super dedicated, that sets the tone for everything. That sets the tone for everyone being super inspired to work at the same place. Everyone trusts everyone else.

(01:39:42)
If you have a thousand or 10,000 people and things have really regressed, you are not able to do selection and you’re choosing random people, what happens is then you need to put a lot of processes and a lot of guardrails in place, just because people don’t fully trust each other, or you have to adjudicate political battles. There are so many things that slow down the org’s ability to operate. And so we’re nearly a thousand people, and we’ve tried to make it so that as large a fraction of those thousand people as possible are super talented, super skilled. It’s one of the reasons we’ve slowed down hiring a lot in the last few months. We grew from 300 to 800, I believe, in the first seven, eight months of the year, and now we’ve slowed down. In the last three months, we went from 800 to 900, 950, something like that. Don’t quote me on the exact numbers, but I think there’s an inflection point around a thousand, and we want to be much more careful how we grow.

(01:40:42)
Early on, and now as well, we’ve hired a lot of physicists. Theoretical physicists can learn things really fast. Even more recently, as we’ve continued to hire, we’ve really had a high bar on both the research side and the software engineering side, have hired a lot of senior people, including folks who used to be at other companies in this space, and we’ve just continued to be very selective. It’s very easy to go from a hundred to a thousand, and a thousand to 10,000, without paying attention to making sure everyone has a unified purpose. It’s so powerful. If your company consists of a lot of different fiefdoms that all want to do their own thing, and they’re all optimizing for their own thing, it’s very hard to get anything done. But if everyone sees the broader purpose of the company, if there’s trust and there’s dedication to doing the right thing, that is a superpower. That in itself, I think, can overcome almost every other disadvantage.
Lex Fridman
(01:41:41)
And to quote Steve Jobs: A players want to look around and see other A players. That’s another way of saying that.
Dario Amodei
(01:41:42)
Correct.
Lex Fridman
(01:41:48)
I don’t know what that is about human nature, but it is demotivating to see people who are not obsessively driving towards a singular mission. And on the flip side of that, it is super motivating to see that. It’s interesting. What does it take to be a great AI researcher or engineer, from everything you’ve seen from working with so many amazing people?
Dario Amodei
(01:42:09)
Yeah. I think the number one quality, especially on the research side, but really both, is open-mindedness. Sounds easy to be open-minded, right? You’re just like, “Oh, I’m open to anything.” But if I think about my own early history with the scaling hypothesis, I was seeing the same data others were seeing. I don’t think I was a better programmer or better at coming up with research ideas than any of the hundreds of people that I worked with. In some ways, I was worse. I’ve never been good at the precise programming of finding the bug, writing the GPU kernels. I could point you to a hundred people here who are better at that than I am.

(01:42:53)
But the thing that I think I did have that was different was that I was just willing to look at something with new eyes. People said, “Oh, we don’t have the right algorithms yet. We haven’t come up with the right way to do things.” And I was just like, “Oh, I don’t know. This neural net has 30 million parameters. What if we gave it 50 million instead? Let’s plot some graphs.” That basic scientific mindset of, “Oh man, I see some variable that I could change. What happens when it changes? Let’s try these different things and create a graph.” This was the simplest thing in the world: change the number of parameters. This wasn’t PhD-level experimental design. This was simple and stupid. Anyone could have done this if you just told them that it was important. It’s also not hard to understand. You didn’t need to be brilliant to come up with this.

(01:43:54)
But you put the two things together and some tiny number of people, some single digit number of people have driven forward the whole field by realizing this. And it’s often like that. If you look back at the discoveries in history, they’re often like that. And so this open-mindedness and this willingness to see with new eyes that often comes from being newer to the field, often experience is a disadvantage for this, that is the most important thing. It’s very hard to look for and test for, but I think it’s the most important thing because when you find something, some really new way of thinking about things, when you have the initiative to do that, it’s absolutely transformative.
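The "plot some graphs" mindset really is this simple. Below is a toy parameter sweep in the spirit of the 30-million-versus-50-million example; the losses are synthetic, generated from an assumed power law rather than real training runs, and the constants are made up. The point is only that varying one knob and looking at the curve is all it takes.

```python
import math

def synthetic_loss(n_params: float) -> float:
    """Toy stand-in for a training run: loss follows an assumed
    power law L = a * N^(-b), the shape scaling-law results report.
    The constants a and b here are invented for illustration."""
    a, b = 50.0, 0.1
    return a * n_params ** (-b)

# sweep parameter counts, as in "30 million... what if 50 million?"
sizes = [10e6, 30e6, 50e6, 100e6, 300e6]
losses = [synthetic_loss(n) for n in sizes]

# on log-log axes a power law is a straight line; recover the
# slope -b from the endpoints of the sweep
slope = (math.log(losses[-1]) - math.log(losses[0])) / (
    math.log(sizes[-1]) - math.log(sizes[0]))
```

Plotting `losses` against `sizes` on log-log axes would show the straight line; `slope` comes out at the assumed exponent, which is the kind of regularity that made the scaling hypothesis visible.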
Lex Fridman
(01:44:34)
And also being able to do rapid experimentation and, in the face of that, being open-minded and curious, looking at the data with these fresh eyes and seeing what it is that it’s actually saying. That applies in mechanistic interpretability.
Dario Amodei
(01:44:46)
It’s another example of this. Some of the early work in mechanistic interpretability was so simple; it’s just no one thought to care about this question before.
Lex Fridman
(01:44:56)
You said what it takes to be a great AI researcher. Can we rewind the clock back: what advice would you give to people interested in AI? They’re young, looking forward to how can I make an impact on the world?
Dario Amodei
(01:45:06)
I think my number one piece of advice is to just start playing with the models. Actually, I worry a little that this seems like obvious advice now. I think three years ago it wasn’t obvious, and people started with, “Oh, let me read the latest reinforcement learning paper.” And you should do that as well, but now, with wider availability of models and APIs, people are doing this more. But I think just experiential knowledge: these models are new artifacts that no one really understands, so get experience playing with them. I would also say, again, in line with do something new, think in some new direction, there are all these things that haven’t been explored. For example, mechanistic interpretability is still very new. It’s probably better to work on that than it is to work on new model architectures, because although it’s more popular than it was before, there are probably 100 people working on it, but there aren’t 10,000 people working on it.

(01:46:07)
And it’s just this fertile area for study. There’s so much low-hanging fruit, you can just walk by and you can pick things. For whatever reason, people aren’t interested in it enough. I think there are some things around long-horizon learning and long-horizon tasks, where there’s a lot to be done. I think evaluations, we’re still very early in our ability to study evaluations, particularly for dynamic systems acting in the world. I think there’s some stuff around multi-agent. Skate to where the puck is going is my advice, and you don’t have to be brilliant to think of it. All the things that are going to be exciting in five years, people even mention them as conventional wisdom, but it’s just somehow there’s this barrier where people don’t double down as much as they could, or they’re afraid to do something that’s not the popular thing. I don’t know why it happens, but getting over that barrier, that’s my number one piece of advice.

Post-training

Lex Fridman
(01:47:14)
Let’s talk if we could a bit about post-training. So it seems that the modern post-training recipe has a little bit of everything. So supervised fine-tuning, RLHF, the constitutional AI with RLAIF-
Dario Amodei
(01:47:32)
Best acronym.
Lex Fridman
(01:47:33)
It’s the, again, that naming thing. And then synthetic data. Seems like a lot of synthetic data, or at least trying to figure out ways to have high-quality synthetic data. So if this is a secret sauce that makes Anthropic’s Claude so incredible, how much of the magic is in the pre-training? How much of it is in the post-training?
Dario Amodei
(01:47:54)
Yeah. So first of all, we’re not perfectly able to measure that ourselves. When you see some great character ability, sometimes it’s hard to tell whether it came from pre-training or post-training. We’ve developed ways to try and distinguish between those two, but they’re not perfect. The second thing I would say is, when there is an advantage, and I think we’ve been pretty good in general at RL, perhaps the best, although I don’t know, I don’t see what goes on inside other companies. Usually it isn’t, “Oh my God, we have this secret magic method that others don’t have.” Usually it’s like, “Well, we got better at the infrastructure so we could run it for longer,” or, “We were able to get higher quality data,” or, “We were able to filter our data better,” or, “We were able to combine these methods in practice.”

(01:48:41)
It’s usually some boring matter of practice and tradecraft. So when I think about how to do something special in terms of how we train these models, both pre-training and even more so post-training, I really think of it a little more, again, as designing airplanes or cars. It’s not just like, “Oh, man. I have the blueprint.” Maybe that lets you make the next airplane. But there’s some cultural tradecraft of how we think about the design process that I think is more important than any particular gizmo we’re able to invent.
Lex Fridman
(01:49:17)
Okay. Well, let me ask you about specific techniques. So first on RLHF, what do you think, just zooming out intuition, almost philosophy … Why do you think RLHF works so well?
Dario Amodei
(01:49:28)
If I go back to the scaling hypothesis, one of the ways to state the scaling hypothesis is: if you train for X and you throw enough compute at it, then you get X. And so RLHF is good at doing what humans want the model to do, or, to state it more precisely, doing what humans who look at the model for a brief period of time and consider different possible responses prefer as the response. Which is not perfect, from both the safety and capabilities perspective, in that humans are often not able to perfectly identify what the model wants, and what humans want in the moment may not be what they want in the long term.

(01:50:05)
So there’s a lot of subtlety there, but the models are good at producing what the humans in some shallow sense want. And it actually turns out that you don’t even have to throw that much compute at it, because of another thing, which is this thing about a strong pre-trained model being halfway to anywhere. So once you have the pre-trained model, you have all the representations you need to get the model where you want it to go.
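The comparison-based training Dario describes, where humans pick the preferred of two responses, is commonly implemented with a Bradley-Terry style reward-model loss. A minimal sketch of that published technique, not Anthropic’s actual code:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss: -log P(chosen preferred over rejected),
    where P = sigmoid(reward_chosen - reward_rejected)."""
    margin = reward_chosen - reward_rejected
    return math.log(1.0 + math.exp(-margin))

# The loss shrinks as the reward model ranks the human-preferred
# response further above the rejected one.
print(round(preference_loss(2.0, 0.0), 4))  # small loss: model agrees with the human
print(round(preference_loss(0.0, 2.0), 4))  # large loss: model disagrees
```

Training minimizes this loss over many human comparisons so the reward model learns to score preferred responses higher; the policy is then optimized against that reward.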
Lex Fridman
(01:50:32)
So do you think RLHF makes the model smarter, or just appear smarter to the humans?
Dario Amodei
(01:50:41)
I don’t think it makes the model smarter. I don’t think it just makes the model appear smarter. It’s like RLHF bridges the gap between the human and the model. I could have something really smart that can’t communicate at all. We all know people like this, people who are really smart but you can’t understand what they’re saying. So I think RLHF just bridges that gap. It’s not the only kind of RL we do. It’s not the only kind of RL that will happen in the future. I think RL has the potential to make models smarter, to make them reason better, to make them operate better, to make them develop new skills even. And perhaps that could be done even in some cases with human feedback. But the kind of RLHF we do today mostly doesn’t do that yet, although we’re very quickly starting to be able to.
Lex Fridman
(01:51:30)
But if you look at the metric of helpfulness, it increases that?
Dario Amodei
(01:51:36)
Yes. It also increases, what was the word in Leopold’s essay, “unhobbling,” where basically the models are hobbled and then you do various trainings to them to unhobble them. I like that word, because it’s a rare word. So I think RLHF unhobbles the models in some ways. And then there are other ways where the model hasn’t yet been unhobbled and needs to be unhobbled.
Lex Fridman
(01:51:58)
If you can say, in terms of cost, is pre-training the most expensive thing? Or does post-training creep up to that?
Dario Amodei
(01:52:05)
At the present moment, it is still the case that pre-training is the majority of the cost. I don’t know what to expect in the future, but I could certainly anticipate a future where post-training is the majority of the cost.
Lex Fridman
(01:52:16)
In that future you anticipate, would it be the humans or the AI that’s the costly thing for the post-training?
Dario Amodei
(01:52:22)
I don’t think you can scale up humans enough to get high quality. Any kind of method that relies on humans and uses a large amount of compute, it’s going to have to rely on some scaled supervision method, like debate or iterated amplification or something like that.

Constitutional AI

Lex Fridman
(01:52:39)
So on that super interesting set of ideas around constitutional AI, can you describe what it is, as first detailed in the December 2022 paper, and beyond that? What is it?
Dario Amodei
(01:52:53)
Yes. So this was from two years ago. The basic idea is, so we described what RLHF is. You have a model and you just sample from it twice. It spits out two possible responses, and you’re like, “Human, which response do you like better?” Or another variant of it is, “Rate this response on a scale of one to seven.” So that’s hard because you need to scale up human interaction, and it’s very implicit. I don’t have a sense of what I want the model to do. I just have a sense of what this average of 1,000 humans wants the model to do. So two ideas. One is, could the AI system itself decide which response is better? Could you show the AI system these two responses and ask which response is better? And then second, well, what criterion should the AI use?

(01:53:43)
And so then there’s this idea: you have a single document, a constitution if you will, that says, these are the principles the model should be using to respond. And the AI system reads those principles as well as reading the environment and the response. And it says, “Well, how good did the AI model do?” It’s basically a form of self-play. You’re training the model against itself. And so the AI gives the response, and then you feed that back into what’s called the preference model, which in turn feeds the model to make it better. So you have this triangle of the AI, the preference model, and the improvement of the AI itself.
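The loop described here, sample the model twice, let an AI judge apply the written principles, collect the resulting preference data, can be sketched with stand-in functions. Every function and string below is a toy placeholder, not Anthropic’s pipeline; only the shape of the loop follows the paper’s description:

```python
import random

# Toy constitution: a single written principle that a stand-in
# "AI judge" applies when comparing two responses.
CONSTITUTION = ["Choose the response that is more helpful and avoids dismissiveness."]

def sample_two_responses(prompt):
    # Stand-in for sampling the model twice; real systems decode from an LLM.
    pool = ["Here is a careful answer.", "Figure it out yourself."]
    return random.sample(pool, 2)

def ai_judge(principles, prompt, a, b):
    # Stand-in for asking an AI model which response better follows
    # the principles; here a crude keyword check plays that role.
    def score(r):
        return -1 if "yourself" in r else 1
    return a if score(a) >= score(b) else b

preference_data = []
for prompt in ["How do I sort a list?", "Explain RLHF."]:
    a, b = sample_two_responses(prompt)
    chosen = ai_judge(CONSTITUTION, prompt, a, b)
    rejected = b if chosen == a else a
    # These (prompt, chosen, rejected) triples train the preference model,
    # which in turn is used to improve the policy model.
    preference_data.append((prompt, chosen, rejected))

for prompt, chosen, _ in preference_data:
    print(prompt, "->", chosen)
```

The key difference from RLHF is only who does the judging: the comparisons are produced by a model reading the constitution rather than by human raters.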
Lex Fridman
(01:54:22)
And we should say that in the constitution, the set of principles are human interpretable. They’re-
Dario Amodei
(01:54:27)
Yeah. Yeah. It’s something both the human and the AI system can read. So it has this nice translatability or symmetry. In practice, we use both a model constitution and RLHF, and we use some of these other methods. So it’s turned into one tool in a toolkit that both reduces the need for RLHF and increases the value we get from each data point of RLHF. It also interacts in interesting ways with future reasoning-type RL methods. So it’s one tool in the toolkit, but I think it is a very important tool.
Lex Fridman
(01:55:05)
Well, it’s a compelling one to us humans, thinking about the founding fathers and the founding of the United States. The natural question is who gets to define the constitution, the set of principles in the constitution, and how?
Dario Amodei
(01:55:20)
Yeah. So I’ll give a practical answer and a more abstract answer. I think the practical answer is: look, in practice, models get used by all kinds of different customers. And so you can have this idea where the model can have specialized rules or principles. We fine-tune versions of models implicitly. We’ve talked about doing it explicitly, having special principles that people can build into the models. So from a practical perspective, the answer can be very different for different people. A customer service agent behaves very differently from a lawyer and obeys different principles.

(01:55:57)
But I think at the base of it, there are specific principles that models have to obey. I think a lot of them are things that people would agree with. Everyone agrees that we don’t want models to present these CBRN risks. I think we can go a little further and agree with some basic principles of democracy and the rule of law. Beyond that, it gets very uncertain, and there our goal is generally for the models to be more neutral, to not espouse a particular point of view, and more just be wise agents or advisors that will help you think things through and will present possible considerations, but don’t express strong or specific opinions.
Lex Fridman
(01:56:42)
OpenAI released a model spec where it clearly, concretely defines some of the goals of the model and specific examples, like A/B, of how the model should behave. Do you find that interesting? By the way, I should mention, I believe the brilliant John Schulman was a part of that. He’s now at Anthropic. Do you think this is a useful direction? Might Anthropic release a model spec as well?
Dario Amodei
(01:57:05)
Yeah. So I think that’s a pretty useful direction. Again, it has a lot in common with constitutional AI. So again, another example of a race to the top. We have something that we think is a better and more responsible way of doing things. It’s also a competitive advantage. Then others discover that it has advantages and start to do that thing. We then no longer have the competitive advantage, but it’s good from the perspective that now everyone has adopted a positive practice that others were not adopting. And so our response to that is, “Well, looks like we need a new competitive advantage in order to keep driving this race upwards.” So that’s how I generally feel about that. I also think every implementation of these things is different. So there were some things in the model spec that were not in constitutional AI, and so we can always adopt those things or at least learn from them. So again, I think this is an example of the positive dynamic that I think we should all want the field to have.

Machines of Loving Grace

Lex Fridman
(01:58:06)
Let’s talk about the incredible essay Machines of Loving Grace. I recommend everybody read it. It’s a long one.
Dario Amodei
(01:58:12)
It is rather long.
Lex Fridman
(01:58:13)
Yeah. It’s really refreshing to read concrete ideas about what a positive future looks like. And you took a bold stance because it’s very possible that you might be wrong on the dates or the specific applications-
Dario Amodei
(01:58:24)
Oh, yeah. I’m fully expecting to well, definitely be wrong about all the details. I might be just spectacularly wrong about the whole thing and people will laugh at me for years. That’s just how the future works.
Lex Fridman
(01:58:40)
So you provided a bunch of concrete positive impacts of AI, how exactly a superintelligent AI might accelerate the rate of breakthroughs in, for example, biology and chemistry, that would then lead to things like we cure most cancers, prevent all infectious disease, double the human lifespan, and so on. So let’s talk about this essay first. Can you give a high-level vision of this essay, and what are the key takeaways for people?
Dario Amodei
(01:59:08)
Yeah. I have spent a lot of time, and Anthropic has spent a lot of effort, on how do we address the risks of AI? How do we think about those risks? We’re trying to do a race to the top, which requires us to build all these capabilities, and the capabilities are cool. But a big part of what we’re trying to do is address the risks. And the justification for that is like, well, all these positive things, the market is this very healthy organism, it’s going to produce all the positive things. The risks? I don’t know, we might mitigate them, we might not. And so we can have more impact by trying to mitigate the risks.

(01:59:46)
But I noticed one flaw in that way of thinking, and it’s not a change in how seriously I take the risks, it’s maybe a change in how I talk about them. It’s that no matter how logical or rational that line of reasoning I just gave might be, if you only talk about risks, your brain only thinks about risks. And so I think it’s actually very important to understand: what if things do go well? And the whole reason we’re trying to prevent these risks is not because we’re afraid of technology, not because we want to slow it down. It’s because if we can get to the other side of these risks, if we can run the gauntlet successfully, to put it in stark terms, then on the other side of the gauntlet are all these great things.

(02:00:36)
And these things are worth fighting for. And these things can really inspire people. And I imagine, because … Look, you have all these investors, all these VCs, all these AI companies talking about all the positive benefits of AI. But as you point out, it’s weird. There’s actually a dearth of really getting specific about it. There’s a lot of random people on Twitter posting these gleaming cities and this vibe of grind, accelerate harder, kick out the … It’s just very aggressive and ideological. But then you’re like, “Well, what are you actually excited about?”

(02:01:17)
And so I figured it would be interesting and valuable for someone who’s actually coming from the risk side to really try to explain what the benefits are, both because I think it’s something we can all get behind and because I want people to understand. I want them to really understand that this isn’t Doomers versus Accelerationists. It’s that, if you have a true understanding of where things are going with AI, and maybe that’s the more important axis, AI is moving fast versus AI is not moving fast, then you really appreciate the benefits and you really want humanity or civilization to seize those benefits. But you also get very serious about anything that could derail them.
Lex Fridman
(02:02:09)
So I think the starting point is to talk about this Powerful AI, which is the term you like to use. Most of the world uses AGI, but you don’t like the term, because it basically has too much baggage, it’s become meaningless. It’s like we’re stuck with the terms whether we like them or not.
Dario Amodei
(02:02:26)
Maybe we’re stuck with the terms and my efforts to change them are futile.
Lex Fridman
(02:02:29)
It’s admirable.
Dario Amodei
(02:02:29)
I’ll tell you what else I don’t … This is a pointless semantic point, but I keep talking about it-
Lex Fridman
(02:02:35)
It’s back to naming again.
Dario Amodei
(02:02:36)
I’m just going to do it once more. I think it’s a little like, let’s say it was 1995 and Moore’s law is making the computers faster. And for some reason there had been this verbal tic where everyone was like, “Well, someday we’re going to have supercomputers. And supercomputers are going to be able to do all these things that … Once we have supercomputers, we’ll be able to sequence the genome, we’ll be able to do other things.” And so, one, it’s true, the computers are getting faster, and as they get faster, they’re going to be able to do all these great things. But there’s no discrete point at which you had a supercomputer and previous computers were not. “Supercomputer” is a term we use, but it’s a vague term to just describe computers that are faster than what we have today.

(02:03:19)
There’s no point at which you pass the threshold and you’re like, “Oh, my God! We’re doing a totally new type of computation and new …” And so I feel that way about AGI. There’s just a smooth exponential. And if by AGI you mean AI is getting better and better, and gradually it’s going to do more and more of what humans do until it’s going to be smarter than humans, and then it’s going to get smarter even from there, then yes, I believe in AGI. But if AGI is some discrete or separate thing, which is the way people often talk about it, then it’s a meaningless buzzword.
Lex Fridman
(02:03:50)
To me, it’s just a platonic form of a powerful AI, exactly how you define it. You define it very nicely. So on the intelligence axis, on pure intelligence, it’s smarter than a Nobel Prize winner, as you describe, across most relevant disciplines. So okay, that’s just intelligence. So it’s both creativity and being able to generate new ideas, all that kind of stuff, in every discipline: Nobel Prize winner in their prime. It can use every modality, so this is self-explanatory, but it can operate across all the modalities of the world.

(02:04:28)
It can go off for many hours, days, and weeks to do tasks and do its own detailed planning, and only ask you for help when it’s needed. This is actually interesting. I think in the essay you said … Again, it’s a bet that it’s not going to be embodied, but it can control embodied tools. So it can control tools, robots, laboratory equipment. The resources used to train it can then be repurposed to run millions of copies of it, and each of those copies would be independent, they could do their own independent work. So you can do the cloning of the intelligence systems.
Dario Amodei
(02:05:03)
Yeah. Yeah. You might imagine from outside the field that there’s only one of these, right? You’ve only made one. But the truth is that the scale-up is very quick. We do this today. We make a model, and then we deploy thousands, maybe tens of thousands of instances of it. I think by the time, certainly within two to three years, whether we have these super powerful AIs or not, clusters are going to get to the size where you’ll be able to deploy millions of these. And they’ll be faster than humans. And so, if your picture is, “Oh, we’ll have one and it’ll take a while to make them,” my point there was, no. Actually you have millions of them right away.
Lex Fridman
(02:05:37)
And in general they can learn and act 10 to 100 times faster than humans. So that’s a really nice definition of powerful AI. Okay, so that’s that. But you also write that, “Clearly such an entity would be capable of solving very difficult problems very fast, but it is not trivial to figure out how fast. Two ‘extreme’ positions both seem false to me.” So the singularity is on the one extreme and the opposite is on the other extreme. Can you describe each of the extremes?
Dario Amodei
(02:06:05)
Yeah.
Lex Fridman
(02:06:06)
So why?
Dario Amodei
(02:06:06)
So yeah. Let’s describe the extremes. So one extreme would be, “Well, look. If we look at evolutionary history, there was this big acceleration, where for hundreds of thousands of years we just had single-celled organisms, and then we had mammals, and then we had apes. And then that quickly turned to humans. Humans quickly built industrial civilization.” And so this is going to keep speeding up, and there’s no ceiling at the human level. Once models get much, much smarter than humans, they’ll get really good at building the next models. And if you write down a simple differential equation, like this is an exponential … And so what’s going to happen is that models will build faster models. Models will build faster models. And those models will build nanobots that can take over the world and produce much more energy than you could produce otherwise. And so, if you just kind of solve this abstract differential equation, then five days after we build the first AI that’s more powerful than humans, the world will be filled with these AIs, and every possible technology that could be invented will be invented.

(02:07:12)
I’m caricaturing this a little bit, but I think that’s one extreme. And the reason I think that’s not the case is that, one, I think it just neglects the laws of physics. It’s only possible to do things so fast in the physical world. Some of those loops go through producing faster hardware. It takes a long time to produce faster hardware. Things take a long time. There’s this issue of complexity. I think no matter how smart you are, people talk about, “Oh, we can make models of biological systems that’ll do everything the biological systems …” Look, I think computational modeling can do a lot. I did a lot of computational modeling when I worked in biology. But there are a lot of things that you can’t predict how … They’re complex enough that just iterating, just running the experiment, is going to beat any modeling, no matter how smart the system doing the modeling is.
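The “simple differential equation” in the caricature above is capability growth proportional to current capability, dI/dt = k·I, whose solution is an exponential. A minimal Euler-integration sketch of that extreme assumption (the constants are arbitrary):

```python
# Euler integration of the caricature dI/dt = k * I: capability growth
# proportional to current capability, ignoring physics, hardware lead
# times, and experiment latency, exactly as the extreme position does.
def simulate(I0=1.0, k=1.0, dt=0.01, steps=500):
    I = I0
    for _ in range(steps):
        I += k * I * dt
    return I

# Grows like e**(k*t); with k*t = 5 the discrete Euler result
# slightly undershoots the exact value e**5 (about 148.4).
print(round(simulate(), 2))
```

The counterarguments in the surrounding discussion amount to saying the real-world k is not constant: it is throttled by hardware production, experiment time, and institutions.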
Lex Fridman
(02:08:08)
Well, even if it’s not interacting with the physical world, just the modeling is going to be hard?
Dario Amodei
(02:08:12)
Yeah. Well, the modeling is going to be hard, and getting the model to match the physical world is going to be-
Lex Fridman
(02:08:18)
All right. So it does have to interact with the physical world to verify.
Dario Amodei
(02:08:21)
But you just look at even the simplest problems. I think I talk about the three-body problem or simple chaotic prediction, or predicting the economy. It’s really hard to predict the economy two years out. Maybe the case is humans can predict what’s going to happen in the economy next quarter, or they can’t really do that. Maybe an AI that’s a zillion times smarter can only predict it out a year or something, instead of … You have this exponential increase in computer intelligence for a linear increase in ability to predict. Same with, again, biological molecules interacting. You don’t know what’s going to happen when you perturb a complex system. You can find simple parts in it, and if you’re smarter, you’re better at finding these simple parts. And then I think human institutions, human institutions are really difficult. It’s been hard to get people-
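The point about an exponential increase in intelligence buying only a linear increase in prediction is a standard property of chaotic systems. A toy logistic-map illustration (not any system from the conversation): shrinking the initial measurement error by a factor of ten thousand extends the usable prediction horizon by only a handful of steps.

```python
def horizon(eps, threshold=0.1, x0=0.3):
    # Steps until two chaotic logistic-map trajectories, initially
    # eps apart, diverge past the threshold. The gap grows roughly
    # like eps * 2**t, so the horizon scales with log(1/eps).
    a, b = x0, x0 + eps
    for t in range(1, 200):
        a, b = 4 * a * (1 - a), 4 * b * (1 - b)
        if abs(a - b) > threshold:
            return t
    return 200

# An exponentially better initial measurement (1e-8 vs 1e-4) buys
# only a modest, roughly linear extension of the horizon.
print(horizon(1e-4), horizon(1e-8))
```
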

(02:09:22)
I won’t give specific examples, but it’s been hard to get people to adopt even the technologies that we’ve developed, even ones where the case for their efficacy is very, very strong. People have concerns. They think things are conspiracy theories. It’s just been very difficult. It’s also been very difficult to get very simple things through the regulatory system. And I don’t want to disparage anyone who works in the regulatory systems of any technology. There are hard things they have to deal with. They have to save lives. But the system as a whole, I think, makes some obvious trade-offs that are very far from maximizing human welfare. And so, if we bring AI systems into these human systems, often the level of intelligence may just not be the limiting factor. It just may be that it takes a long time to do something. Now, if the AI system circumvented all governments, if it just said, “I’m dictator of the world and I’m going to do whatever,” some of these things it could do.

(02:10:33)
Again, the things have to do with complexity. I still think a lot of things would take a while. I don’t think it helps that the AI systems can produce a lot of energy or go to the moon. Some people, in comments responding to the essay, said the AI system can produce a lot of energy and smarter AI systems. That’s missing the point. That kind of cycle doesn’t solve the key problems that I’m talking about here. So I think a bunch of people missed the point there. But even if it were completely unaligned and could get around all these human obstacles, it would have trouble.

(02:11:04)
But again, if you want this to be an AI system that doesn’t take over the world, that doesn’t destroy humanity, then basically it’s going to need to follow basic human laws. If we want to have an actually good world, we’re going to have to have an AI system that interacts with humans, not one that creates its own legal system, or disregards all the laws, or all of that. So as inefficient as these processes are, we’re going to have to deal with them, because there needs to be some popular and democratic legitimacy in how these systems are rolled out. We can’t have a small group of people who are developing these systems say, “This is what’s best for everyone.” I think it’s wrong, and I think in practice it’s not going to work anyway. So you put all those things together, and we’re not going to change the world and upload everyone in five minutes. A, I don’t think it’s going to happen, and B, to the extent that it could happen, it’s not the way to lead to a good world. So that’s on one side.

(02:12:07)
On the other side, there’s another set of perspectives, which I have actually in some ways more sympathy for, which is, look, we’ve seen big productivity increases before. Economists are familiar with studying the productivity increases that came from the computer revolution and internet revolution. And generally those productivity increases were underwhelming. They were less than you might imagine. There was a quote from Robert Solow, “You see the computer revolution everywhere except the productivity statistics.” So why is this the case? People point to the structure of firms, the structure of enterprises, how slow it’s been to roll out our existing technology to very poor parts of the world, which I talk about in the essay. How do we get these technologies to the poorest parts of the world that are behind on cell phone technology, computers, medicine, let alone newfangled AI that hasn’t been invented yet.

(02:13:04)
So you could have a perspective that’s like, “Well, this is amazing technically, but it’s all a nothingburger.” I think Tyler Cowen, who wrote something in response to my essay, has that perspective. I think he thinks the radical change will happen eventually, but he thinks it’ll take 50 or 100 years. And you could have even more static perspectives on the whole thing. I think there’s some truth to it. I think the time scale is just too long, though, and I can see it. I can actually see both sides with today’s AI. So a lot of our customers are large enterprises who are used to doing things a certain way. I’ve also seen it in talking to governments, right? Those are prototypical institutions, entities that are slow to change. But the dynamic I see over and over again is: yes, it takes a long time to move the ship. Yes, there’s a lot of resistance and lack of understanding.

(02:13:58)
But the thing that makes me feel that progress will in the end happen moderately fast, not incredibly fast, but moderately fast, is what I find over and over again, in large companies, even in governments, which have been actually surprisingly forward-leaning: you find two things that move things forward. One, you find a small fraction of people within a company, within a government, who really see the big picture, who see the whole scaling hypothesis, who understand where AI is going, or at least understand where it’s going within their industry. And there are a few people like that within the current US government who really see the whole picture. And those people see that this is the most important thing in the world, so they agitate for it. And the thing is, they alone are not enough to succeed, because they’re a small set of people within a large organization.

(02:14:51)
But as the technology starts to roll out, as it succeeds in some places, with the folks who are most willing to adopt it, the specter of competition gives them a wind at their backs, because they can point within their large organization. They can say, “Look, these other guys are doing this.” One bank can say, “Look, this newfangled hedge fund is doing this thing. They’re going to eat our lunch.” In the US, we can say we’re afraid China’s going to get there before we are. And that combination, the specter of competition plus a few visionaries within these organizations that in many ways are sclerotic, you put those two things together and it actually makes something happen. It’s interesting. It’s a balanced fight between the two, because inertia is very powerful, but eventually, over enough time, the innovative approach breaks through.

(02:15:48)
And I’ve seen that happen. I’ve seen the arc of that over and over again, and it’s like the barriers are there, the barriers to progress, the complexity, not knowing how to use the model, how to deploy them. And for a bit it seems like they’re going to last forever, change doesn’t happen. But then eventually change happens, and it always comes from a few people. I felt the same way when I was an advocate of the scaling hypothesis within the AI field itself and others didn’t get it. It felt like no one would ever get it. Then it felt like we had a secret almost no one else had. And then, a couple years later, everyone has the secret. And so I think that’s how it’s going to go with deployment of AI in the world. The barriers are going to fall apart gradually and then all at once.

(02:16:35)
And so I think this is going to be more, and this is just an instinct, I could easily see how I’m wrong. I think it’s going to be more five or ten years, as I say in the essay, than it’s going to be 50 or 100 years. I also think it’s going to be five or ten years more than it’s going to be five or ten hours, because I’ve just seen how human systems work. And I think a lot of these people who write down these differential equations, who say AI is going to make more powerful AI, who can’t understand how it could possibly be the case that these things won’t change so fast, I think they don’t understand these things.

AGI timeline

Lex Fridman
(02:17:11)
So what to you is the timeline to where we achieve AGI, A.K.A. powerful AI, A.K.A. super useful AI?
Dario Amodei
(02:17:22)
I’m going to start calling it that.
Lex Fridman
(02:17:24)
It’s a debate about naming. On pure intelligence, smarter than a Nobel Prize winner in every relevant discipline, and all the things we’ve said. Modality: it can go and do stuff on its own for days, weeks, and do biology experiments on its own in one … You know what? Let’s just stick to biology, because you sold me on the whole biology and health section. And that’s so exciting from just … I was getting giddy from a scientific perspective. It made me want to be a biologist.
Dario Amodei
(02:17:56)
No, no. This was the feeling I had when I was writing it, that this would be such a beautiful future if we can just make it happen. If we can just get the landmines out of the way and make it happen. There’s so much beauty and elegance and moral force behind it if we can just … And it’s something we should all be able to agree on. As much as we fight about all these political questions, is this something that could actually bring us together? But you were asking when will we get this?
Lex Fridman
(02:18:32)
When? When do you think? Just putting numbers on the table.
Dario Amodei
(02:18:36)
This is, of course, the thing I’ve been grappling with for many years, and I’m not at all confident. If I say 2026 or 2027, there will be a zillion people on Twitter who will be like, “AI CEO said 2026, 2020 … ” and it’ll be repeated for the next two years that this is definitely when I think it’s going to happen. So whoever’s excerpting these clips will crop out the thing I just said and only say the thing I’m about to say. But I’ll just say it anyway-
Lex Fridman
(02:19:06)
Have fun with it.
Dario Amodei
(02:19:08)
So if you extrapolate the curves that we’ve had so far, right? If you say, “Well, I don’t know, we’re starting to get to PhD level, and last year we were at undergraduate level, and the year before we were at the level of a high school student.” Again, you can quibble about at what tasks and for what, and we’re still missing modalities, but those are being added. Computer use was added, image input was added, image generation has been added. And this is totally unscientific, but if you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027. Again, lots of things could derail it. We could run out of data. We might not be able to scale clusters as much as we want. Maybe Taiwan gets blown up or something, and then-
Dario Amodei
(02:20:00)
Then we can’t produce as many GPUs as we want. So there are all kinds of things that could derail the whole process. So I don’t fully believe the straight-line extrapolation, but if you believe the straight-line extrapolation, we’ll get there in 2026 or 2027. I think the most likely outcome is that there’s some mild delay relative to that. I don’t know what that delay is, but I think it could happen on schedule. I think there could be a mild delay. I think there are still worlds where it doesn’t happen in a hundred years. The number of those worlds is rapidly decreasing. We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years.

(02:20:39)
There were a lot more in 2020, although my guess, my hunch at that time was that we’ll make it through all those blockers. So sitting as someone who has seen most of the blockers cleared out of the way, I suspect, my hunch, my suspicion is that the rest of them will not block us. But look, at the end of the day, I don’t want to represent this as a scientific prediction. People call them scaling laws. That’s a misnomer. Like Moore’s law is a misnomer. Moore’s laws, scaling laws, they’re not laws of the universe. They’re empirical regularities. I am going to bet in favor of them continuing, but I’m not certain of that.
Lex Fridman
(02:21:15)
So you extensively described sort of the compressed 21st century, how AGI will help set forth a chain of breakthroughs in biology and medicine that help us in all these kinds of ways that I mentioned. What are the early steps it might take? And by the way, I asked Claude for good questions to ask you, and Claude told me to ask: what do you think a typical day for a biologist working with AGI will look like in this future?
Dario Amodei
(02:21:45)
Yeah, yeah.
Lex Fridman
(02:21:46)
Claude is curious.
Dario Amodei
(02:21:48)
Well, let me start with your first questions and then I’ll answer that. Claude wants to know what’s in his future, right?
Lex Fridman
(02:21:52)
Exactly.
Dario Amodei
(02:21:54)
Who am I going to be working with?
Lex Fridman
(02:21:55)
Exactly.
Dario Amodei
(02:21:56)
So one of the things I went hard on in the essay, let me go back to this idea, because it’s really had an impact on me: this idea that within large organizations and systems, there end up being a few people or a few new ideas that cause things to go in a different direction than they would’ve before, that kind of disproportionately affect the trajectory. There’s a bunch of the same thing going on, right? If you think about the health world, there are trillions of dollars paid out in Medicare and other health insurance, and then the NIH is 100 billion. And if I think of the few things that have really revolutionized anything, they could be encapsulated in a small fraction of that. And so when I think of where will AI have an impact, I’m like, “Can AI turn that small fraction into a much larger fraction and raise its quality?”

(02:22:49)
And within biology, my experience within biology is that the biggest problem of biology is that you can’t see what’s going on. You have very little ability to see what’s going on and even less ability to change it, right? What you have is this: from this, you have to infer that there’s a bunch of cells, and that within each cell is 3 billion base pairs of DNA built according to a genetic code. And there are all these processes that are just going on without any ability of us unaugmented humans to affect them. These cells are dividing. Most of the time that’s healthy, but sometimes that process goes wrong and that’s cancer. The cells are aging, your skin may change color, develop wrinkles as you age, and all of this is determined by these processes: all these proteins being produced, transported to various parts of the cells, binding to each other.

(02:23:50)
And in our initial state of knowledge about biology, we didn’t even know that these cells existed. We had to invent microscopes to observe the cells. We had to invent more powerful microscopes to see below the level of the cell to the level of molecules. We had to invent X-ray crystallography to see the DNA. We had to invent gene sequencing to read the DNA. We had to invent protein folding technology to predict how it would fold and how these things bind to each other. We had to invent various techniques for editing; now we can edit the DNA with CRISPR, as of the last 12 years. So a whole big part of the history of biology is basically our ability to read and understand what’s going on and our ability to reach in and selectively change things. And my view is that there’s so much more we can still do there.

(02:24:48)
You can do CRISPR, but you do it for your whole body. Let’s say I want to do it for one particular type of cell and I want the rate of targeting the wrong cell to be very low. That’s still a challenge. That’s still something people are working on. That’s what we might need for gene therapy for certain diseases. And it goes beyond this, to gene sequencing, to new types of nanomaterials for observing what’s going on inside cells, to antibody drug conjugates. The reason I’m saying all this is that this could be a leverage point for the AI systems, right? The number of such inventions is in the mid double digits or something, maybe low triple digits, over the history of biology. Let’s say I have a million of these AIs. Can they, working together, discover thousands of these very quickly, and does that provide a huge lever?

(02:25:45)
Instead of trying to leverage the two trillion a year we spend on Medicare or whatever, can we leverage the 1 billion a year that’s spent on discovery, but with much higher quality? And so what is it like being a scientist that works with an AI system? The way I think about it is, in the early stages, the AIs are going to be like grad students. You’re going to give them a project. You’re going to say, “I’m the experienced biologist. I’ve set up the lab.” The biology professor, or even the grad students themselves, will say to the AI system, “I’d like to study this.” And the AI system has all the tools. It can look up all the literature to decide what to do. It can look at all the equipment. It can go to a website and say, “Hey, I’m going to go to Thermo Fisher or whatever the dominant lab equipment company is today; in my time it was Thermo Fisher.

(02:26:48)
I’m going to order this new equipment to do this. I’m going to run my experiments. I’m going to write up a report about my experiments. I’m going to inspect the images for contamination. I’m going to decide what the next experiment is. I’m going to write some code and run a statistical analysis.” All the things a grad student would do; it’ll be a computer with an AI that the professor talks to every once in a while, and it says, “This is what you’re going to do today.” The AI system comes to it with questions. When it’s necessary to run the lab equipment, it may be limited in some ways. It may have to hire a human lab assistant to do the experiment and explain how to do it, or it could use advances in lab automation that are gradually being developed or have been developed over the last decade or so and will continue to be developed.

(02:27:38)
And so it’ll look like there’s a human professor and 1,000 AI grad students, and if you go to one of these Nobel Prize-winning biologists, you’ll say, “Okay, well, you had like 50 grad students. Well, now you have 1,000, and they’re smarter than you are, by the way.” Then I think at some point it’ll flip around, where the AI systems will be the PIs, will be the leaders, and they’ll be ordering humans or other AI systems around. So I think that’s how it’ll work on the research side.
Lex Fridman
(02:28:06)
And they would be the inventors of a CRISPR type technology.
Dario Amodei
(02:28:08)
They would be the inventors of a CRISPR type technology. And then I think, as I say in the essay, we’ll want to turn, probably turning loose is the wrong term, but we’ll want to harness the AI systems to improve the clinical trial system as well. There’s some amount of this that’s regulatory, that’s a matter of societal decisions, and that’ll be harder. But can we get better at predicting the results of clinical trials? Can we get better at statistical design, so that clinical trials that used to require 5,000 people, and therefore needed $100 million and a year to enroll them, now need 500 people and two months to enroll them? That’s where we should start. And can we increase the success rate of clinical trials by doing things in animal trials that we used to do in clinical trials, and doing things in simulations that we used to do in animal trials? Again, we won’t be able to simulate it all. AI is not God, but can we shift the curve substantially and radically? So I don’t know, that would be my picture.
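As a back-of-the-envelope illustration of the statistical-design point (a sketch using the generic textbook two-arm sample-size formula, not any method Dario describes; the effect sizes below are made up): required enrollment scales with the inverse square of the detectable effect, so anything that targets patients where the expected effect is larger shrinks trials dramatically.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Per-arm sample size for detecting a mean difference `delta`
    between two arms with outcome standard deviation `sigma`."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) ** 2) * sigma**2 / delta**2)

# Hypothetical: enrichment triples the expected effect, cutting enrollment ~10x
print(n_per_arm(delta=0.10, sigma=1.0))  # broad population
print(n_per_arm(delta=0.32, sigma=1.0))  # enriched population
```

Since the required n scales as 1/delta², roughly tripling the detectable effect cuts the trial size by about 10x, which is the flavor of going from 5,000 enrollees to 500.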
Lex Fridman
(02:29:15)
Doing it in vitro and in simulation. I mean, you’re still slowed down. It still takes time, but you can do it much, much faster.
Dario Amodei
(02:29:21)
Yeah, yeah. Can we take just one step at a time, and can that add up to a lot of steps? Even though we still need clinical trials, even though we still need laws, even though the FDA and other organizations will still not be perfect, can we just move everything in a positive direction, and when you add up all those positive directions, do you get everything that was going to happen from here to 2100 instead happening from 2027 to 2032 or something?

Programming

Lex Fridman
(02:29:46)
Another way that I think the world might be changing with AI, even today, but moving towards this future of the powerful, super useful AI, is programming. So, the nature of programming, because it’s so intimate to the actual act of building AI, how do you see that changing for us humans?
Dario Amodei
(02:30:09)
I think that’s going to be one of the areas that changes fastest for two reasons. One, programming is a skill that’s very close to the actual building of the AI. So the farther a skill is from the people who are building the AI, the longer it’s going to take to get disrupted by the AI. I truly believe that AI will disrupt agriculture. Maybe it already has in some ways, but that’s just very distant from the folks who are building AI, and so I think it’s going to take longer. But programming is the bread and butter of a large fraction of the employees who work at Anthropic and at the other companies, and so it’s going to happen fast. The other reason it’s going to happen fast is with programming, you close the loop both when you’re training the model and when you’re applying the model.

(02:30:52)
The idea that the model can write the code means that the model can then run the code and then see the results and interpret it back. And so it really has an ability, unlike hardware, unlike biology, which we just discussed, the model has an ability to close the loop. And so I think those two things are going to lead to the model getting good at programming very fast. As we’ve seen on typical real-world programming tasks, models have gone from 3% in January of this year to 50% in October of this year. So we’re on that S-curve where it’s going to start slowing down soon because you can only get to 100%. But I would guess that in another 10 months, we’ll probably get pretty close. We’ll be at least 90%. So again, I don’t know how long it’ll take, but I would guess again, 2026, 2027. Twitter people who crop out these numbers and get rid of the caveats: I don’t know,

(02:31:53)
I don’t like you, go away. I would guess that the kind of task the vast majority of coders do, if we make the task very narrow, just write code, AI systems will be able to do that. Now that said, I think comparative advantage is powerful. We’ll find that when AIs can do 80% of a coder’s job, including most of it that’s literally write code with a given spec, the remaining parts of the job become more leveraged for humans, right? There’ll be more about high-level system design, or looking at the app and whether it’s architected well, and the design and UX aspects, and eventually AI will be able to do those as well. That’s my vision of the powerful AI system. But I think for much longer than we might expect, we will see that the small parts of the job that humans still do will expand to fill their entire job, in order for overall productivity to go up. That’s something we’ve seen before. It used to be that writing and editing letters was very difficult, and producing the printed version was difficult. Well, as soon as you had word processors and then computers, and it became easy to produce work and easy to share it, then that became instant and all the focus was on the ideas. So this logic of comparative advantage, which expands tiny parts of the tasks into large parts of the tasks and creates new tasks in order to expand productivity, I think that’s going to be the case.

(02:33:32)
Again, someday AI will be better at everything and that logic won’t apply, and then humanity will have to think about how to collectively deal with that and we’re thinking about that every day and that’s another one of the grand problems to deal with aside from misuse and autonomy and we should take it very seriously. But I think in the near term, and maybe even in the medium term, medium term like 2, 3, 4 years, I expect that humans will continue to have a huge role and the nature of programming will change, but programming as a role, programming as a job will not change. It’ll just be less writing things line by line and it’ll be more macroscopic.
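The 3%-to-50% trajectory and the "S-curve toward 100%" intuition mentioned above can be sketched by fitting a logistic through those two data points. This is purely illustrative curve-fitting; the month indexing is an assumption, and nothing here is a real forecast.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Months since January, observed success rate on real-world coding tasks
t1, p1 = 0, 0.03   # January: ~3%
t2, p2 = 9, 0.50   # October: ~50%

# Fit p(t) = 1 / (1 + exp(-k * (t - t0))) through both points
k = (logit(p2) - logit(p1)) / (t2 - t1)   # growth rate per month
t0 = t2 - logit(p2) / k                   # month where the curve crosses 50%

def predict(t):
    return 1 / (1 + math.exp(-k * (t - t0)))

# Extrapolating 10 more months lands above 90%, hence the slowdown near 100%
print(round(predict(t2 + 10), 3))
```

The logistic bakes in the ceiling at 100%, which is exactly why the curve must flatten after the midpoint even if the underlying progress rate stays constant.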
Lex Fridman
(02:34:10)
And I wonder what the future of IDEs looks like. So, the tooling for interacting with AI systems. This is true for programming, and also probably true in other contexts, like computer use, but maybe domain-specific. Like we mentioned biology, it probably needs its own tooling for how to be effective, and then programming needs its own tooling. Is Anthropic going to play in that space of tooling as well, potentially?
Dario Amodei
(02:34:30)
I’m absolutely convinced that powerful IDEs, that there’s so much low-hanging fruit to be grabbed there. Right now it’s just like you talk to the model and it talks back. But look, I mean, IDEs are great at lots of static analysis; so much is possible with static analysis, like many bugs you can find without even running the code. Then IDEs are good for running particular things, organizing your code, measuring coverage of unit tests. There’s so much that’s been possible with normal IDEs. Now you add something like, well, the model can now write code and run code. I am absolutely convinced that over the next year or two, even if the quality of the models didn’t improve, there would be enormous opportunity to enhance people’s productivity by catching a bunch of mistakes, doing a bunch of grunt work for people, and that we haven’t even scratched the surface.
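The "bugs you can find without even running the code" point can be illustrated with a toy static check. This hypothetical sketch (my construction, not anything from a real IDE) uses Python's `ast` module to flag local variables that are assigned but never read, a common sign of a typo:

```python
import ast

def unused_locals(source):
    """Toy static check: flag names assigned but never read in each function."""
    findings = []
    for func in ast.walk(ast.parse(source)):
        if isinstance(func, ast.FunctionDef):
            assigned = {n.id for n in ast.walk(func)
                        if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
            read = {n.id for n in ast.walk(func)
                    if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
            findings += [f"{func.name}: '{v}' assigned but never used"
                         for v in sorted(assigned - read)]
    return findings

sample = '''
def greet(name):
    mesage = f"hello {name}"   # typo: assigned, then never read
    return "hello"
'''
print(unused_locals(sample))
```

A real linter does much more (scoping, augmented assignment, comprehensions), but the principle is the same: inspect the syntax tree rather than execute anything.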

(02:35:33)
Anthropic itself, I mean, you can’t say no… It’s hard to say what will happen in the future. Currently, we’re not trying to make such IDEs ourselves; rather, we’re powering the companies like Cursor or Cognition, or some of the others in the security space, others that I could mention as well, that are building such things themselves on top of our API. And our view has been, let 1,000 flowers bloom. We don’t internally have the resources to try all these different things. Let’s let our customers try it, and we’ll see who succeeds, and maybe different customers will succeed in different ways. So I both think this is super promising, and Anthropic isn’t eager to, at least right now, compete with our customers in this space, and maybe never will.
Lex Fridman
(02:36:27)
Yeah, it’s been interesting to watch Cursor try to integrate Claude successfully, because it’s actually fascinating how many places it can help the programming experience. It’s not trivial.
Dario Amodei
(02:36:37)
It is really astounding. I feel like as a CEO, I don’t get to program that much, and I feel like if six months from now I go back, it’ll be completely unrecognizable to me.

Meaning of life

Lex Fridman
(02:36:45)
Exactly. In this world with super powerful AI that’s increasingly automated, what’s the source of meaning for us humans? Work is a source of deep meaning for many of us. Where do we find the meaning?
Dario Amodei
(02:37:01)
This is something that I’ve written about a little bit in the essay, though I actually give it a bit short shrift, not for any principled reason. This essay, if you can believe it, was originally going to be two or three pages I was going to talk about at an all-hands. And the reason I realized it was an important, underexplored topic is that I just kept writing things and I was like, “Oh man, I can’t do this justice.” And so the thing ballooned to 40 or 50 pages, and then when I got to the work and meaning section, I’m like, “Oh man, this is going to be 100 pages on its own. I’m going to have to write a whole other essay about that.” But meaning is actually interesting, because you think about the life that someone lives or something. Or let’s say you were to put me in, I don’t know, a simulated environment or something where I have a job and I’m trying to accomplish things, and I do that for 60 years, and then you’re like, “Oh, oops, this was actually all a game,” right?

(02:37:56)
Does that really kind of rob you of the meaning of the whole thing? I still made important choices, including moral choices. I still sacrificed. I still had to gain all these skills. Or, a similar exercise: think back to one of the historical figures who discovered electromagnetism or relativity or something. If you told them, “Well, actually 20,000 years ago, some alien on this planet discovered this before you did,” does that rob the discovery of meaning? It doesn’t really seem like it to me, right? It seems like the process is what matters, and how it shows who you are as a person along the way, and how you relate to other people, and the decisions that you make along the way. Those are consequential. I could imagine if we handle things badly in an AI world, we could set things up where people don’t have any long-term source of meaning, but that’s more a set of choices we make, that’s more a matter of the architecture of society with these powerful models. If we design it badly and for shallow things, then that might happen. I would also say that in most people’s lives today, admirably, they work very hard to find meaning in those lives. Look, we who are privileged and who are developing these technologies, we should have empathy for people, not just here but in the rest of the world, who spend a lot of their time scraping by to survive. Assuming we can distribute the benefits of this technology everywhere, their lives are going to get a hell of a lot better, and meaning will be important to them, as it is important to them now.

(02:39:41)
But we should not forget the importance of that, and the idea of meaning as the only important thing is in some ways an artifact of a small subset of people who have been economically fortunate. But all of that said, I think a world is possible with powerful AI that not only has as much meaning for everyone, but that has more meaning for everyone, that can allow everyone to see worlds and experiences that were either possible for no one to see, or possible for very few people to experience.

(02:40:21)
So I am optimistic about meaning. I worry about economics and the concentration of power. That’s actually what I worry about more. I worry about how we make sure that that fair world reaches everyone. When things have gone wrong for humans, they’ve often gone wrong because humans mistreat other humans. That is maybe, in some ways, even more than the autonomous risk of AI or the question of meaning, the thing I worry about most: the concentration of power, the abuse of power, structures like autocracies and dictatorships where a small number of people exploit a large number of people. I’m very worried about that.
Lex Fridman
(02:41:08)
And AI increases the amount of power in the world, and if you concentrate that power and abuse that power, it can do immeasurable damage.
Dario Amodei
(02:41:16)
Yes, it’s very frightening. It’s very frightening.
Lex Fridman
(02:41:20)
Well, I highly encourage people to read the full essay. That should probably be a book or a sequence of essays, because it does paint a very specific future. And I could tell the later sections got shorter and shorter, because you probably started to realize that this was going to be a very long essay if you kept going.
Dario Amodei
(02:41:37)
One, I realized it would be very long, and two, I’m very aware of and very much tried to avoid just being, I don’t know what the term for it is, but one of these people who’s overconfident and has an opinion on everything and says a bunch of stuff and isn’t an expert. I very much tried to avoid that. But I have to admit, once I got to the biology sections, I wasn’t an expert. And so as much as I expressed uncertainty, I probably said a bunch of things that were embarrassing or wrong.
Lex Fridman
(02:42:06)
Well, I was excited for the future you painted, and thank you so much for working hard to build that future and thank you for talking to me, Dario.
Dario Amodei
(02:42:12)
Thanks for having me. I just hope we can get it right and make it real. And if there’s one message I want to send, it’s that to get all this stuff right, to make it real, we both need to build the technology, build the companies, the economy around using this technology positively, but we also need to address the risks, because those risks are in our way. They’re landmines on the way from here to there, and we have to defuse those landmines if we want to get there.
Lex Fridman
(02:42:41)
It’s a balance like all things in life.
Dario Amodei
(02:42:43)
Like all things.

Amanda Askell

Lex Fridman
(02:42:44)
Thank you. Thanks for listening to this conversation with Dario Amodei. And now, dear friends, here’s Amanda Askell. You are a philosopher by training. So what sort of questions did you find fascinating through your journey in philosophy at Oxford and NYU, and then switching over to the AI problems at OpenAI and Anthropic?
Amanda
(02:43:07)
I think philosophy is actually a really good subject if you are fascinated with everything, because there’s a philosophy of everything. So if you do philosophy of mathematics for a while and then you decide that you’re actually really interested in chemistry, you can do philosophy of chemistry for a while, you can move into ethics or philosophy of politics. I think towards the end, I was really interested in ethics primarily. So that was what my PhD was on. It was on a kind of technical area of ethics, which was ethics where worlds contain infinitely many people; strangely, a little bit on the less practical end of ethics. And then I think that one of the tricky things with doing a PhD in ethics is that you’re thinking a lot about the world, how it could be better, problems, and you’re doing a PhD in philosophy. And I think when I was doing my PhD, I was like, this is really interesting.

(02:43:57)
It’s probably one of the most fascinating questions I’ve ever encountered in philosophy, and I love it, but I would rather see if I can have an impact on the world and see if I can do good things. And I think that was around the time that AI was still probably not as widely recognized as it is now. That was around 2017, 2018. I had been following progress, and it seemed like it was becoming kind of a big deal. And I was basically just happy to get involved and see if I could help, because I was like, “Well, if you try to do something impactful and you don’t succeed, you tried to do the impactful thing, and you can go be a scholar and feel like you tried. And if it doesn’t work out, it doesn’t work out.” And so then I went into AI policy at that point.
Lex Fridman
(02:44:46)
And what does AI policy entail?
Amanda
(02:44:48)
At the time, this was more thinking about the political impact and the ramifications of AI. And then I slowly moved into AI evaluation: how we evaluate models, how they compare with human outputs, whether people can tell the difference between AI and human outputs. And then when I joined Anthropic, I was more interested in doing technical alignment work. And again, just seeing if I could do it, and being like, if I can’t, then that’s fine, I tried. That’s sort of the way I lead life, I think.

Programming advice for non-technical people

Lex Fridman
(02:45:21)
Oh, what was that like sort of taking the leap from the philosophy of everything into the technical?
Amanda
(02:45:25)
I think that sometimes people do this thing that I’m not that keen on, where they’ll be like, “Is this person technical or not?” You’re either a person who can code and isn’t scared of math, or you’re not. And I think I’m maybe just more like, I think a lot of people are actually very capable of working in these kinds of areas if they just try it. And so I didn’t actually find it that bad. In retrospect, I’m sort of glad I wasn’t speaking to people who treated it that way. I’ve definitely met people who are like, “Whoa, you learned how to code?” And I’m like, “Well, I’m not an amazing engineer.” I’m surrounded by amazing engineers. My code’s not pretty, but I enjoyed it a lot, and I think that in many ways, at least in the end, I flourished more in the technical areas than I would have in the policy areas.
Lex Fridman
(02:46:12)
Politics is messy and it’s harder to find solutions to problems in the space of politics, like definitive, clear, provable, beautiful solutions as you can with technical problems.
Amanda
(02:46:25)
Yeah. And I feel like I have one or two sticks that I hit things with, and one of them is arguments. So just trying to work out what a solution to a problem is, and then trying to convince people that that is the solution, and being convinced if I’m wrong. And the other one is more empiricism: just finding results, having a hypothesis, testing it. I feel like a lot of policy and politics is layers above that. Somehow I don’t think it would work if I was just like, “I have a solution to all of these problems, here it is written down. If you just want to implement it, that’s great.” That feels like not how policy works. And so I think that’s where I probably just wouldn’t have flourished, is my guess.
Lex Fridman
(02:47:06)
Sorry to go in that direction, but I think it would be pretty inspiring for people who are “non-technical” to see the incredible journey you’ve been on. So what advice would you give to people who, and this is a lot of people, think they’re underqualified, insufficiently technical, to help in AI?
Amanda
(02:47:27)
Yeah, I think it depends on what they want to do. And in many ways it’s a little bit strange. It’s kind of funny that I ramped up technically at a time when, now that I look at it, I’m like, “Models are so good at assisting people with this stuff that it’s probably easier now than when I was working on this.” So part of me is like, find a project and see if you can actually just carry it out. That’s probably my best advice. I don’t know if that’s just because I’m very project-based in my learning.

(02:48:02)
I don’t think I learn very well from, say, courses or even from books, at least when it comes to this kind of work. The thing I’ll often try to do is just have projects that I’m working on and implement them. And this can include really small, silly things. If I get slightly addicted to word games or number games or something, I would just code up a solution to them, because there’s some part of my brain where it just completely eradicates the itch. Once you have solved it and you have a solution that works every time, I would then be like, “Cool, I can never play that game again. That’s awesome.”
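A game solver of the sort described here can be genuinely tiny. As a hypothetical example (my construction, not Amanda's actual code), here is a brute-force solver for the classic "24 game": combine four numbers with +, -, *, / to reach 24.

```python
from itertools import permutations

def solve(nums, target=24, eps=1e-6):
    """Brute-force search: repeatedly combine two numbers with +, -, *, /
    until one value remains; return an expression hitting the target."""
    if len(nums) == 1:
        value, expr = nums[0]
        return expr if abs(value - target) < eps else None
    for (a, ea), (b, eb) in permutations(nums, 2):
        rest = list(nums)
        rest.remove((a, ea))
        rest.remove((b, eb))
        candidates = [(a + b, f"({ea}+{eb})"),
                      (a - b, f"({ea}-{eb})"),
                      (a * b, f"({ea}*{eb})")]
        if abs(b) > eps:  # guard against dividing by (near) zero
            candidates.append((a / b, f"({ea}/{eb})"))
        for value, expr in candidates:
            found = solve(rest + [(value, expr)], target, eps)
            if found:
                return found
    return None

print(solve([(4, "4"), (7, "7"), (8, "8"), (8, "8")]))
```

Exhaustive search over which pair to combine next is fine at this size; four numbers means only a few thousand states, and once it prints a working expression the itch is, as she says, eradicated.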
Lex Fridman
(02:48:36)
Yeah, there’s a real joy to building game playing engines, board games especially. Pretty quick, pretty simple, especially a dumb one. And then you can play with it.
Amanda
(02:48:48)
Yeah. And then it’s also just trying things. Part of me is maybe it’s that attitude that I like is the whole figure out what seems to be the way that you could have a positive impact and then try it. And if you fail and in a way that you’re like, “I actually can never succeed at this,” you’ll know that you tried and then you go into something else and you probably learn a lot.

Talking to Claude

Lex Fridman
(02:49:10)
So one of the things that you’re an expert in and you do is creating and crafting Claude’s character and personality. And I was told that you have probably talked to Claude more than anybody else at Anthropic, like literal conversations. I guess there’s a Slack channel where, the legend goes, you just talk to it nonstop. So what’s the goal of creating and crafting Claude’s character and personality?
Amanda
(02:49:37)
It’s also funny if people think that about the Slack channel, because that’s one of five or six different methods that I have for talking with Claude, and I’m like, “Yes, this is a tiny percentage of how much I talk with Claude.” One thing I really like about the character work is that from the outset it was seen as an alignment piece of work and not something like a product consideration, though I think it actually does make Claude enjoyable to talk with, at least I hope so. But I guess my main thought with it has always been trying to get Claude to behave the way you would ideally want anyone to behave if they were in Claude’s position. So imagine that I take someone and they know that they’re going to be talking with potentially millions of people, so that what they’re saying can have a huge impact, and you want them to behave well in this really rich sense.

(02:50:41)
I think that doesn’t just mean being, say, ethical, though it does include that, and not being harmful, but also being nuanced, thinking through what a person means, trying to be charitable with them, being a good conversationalist. Really this kind of rich, sort of Aristotelian notion of what it is to be a good person, and not a thin notion of ethics: a more comprehensive notion of what it is to be good. So that includes things like, when should you be humorous? When should you be caring? How much should you respect autonomy and people’s ability to form opinions themselves? And how should you do that? I think that’s the kind of rich sense of character that I wanted, and still do want, Claude to have.
Lex Fridman
(02:51:26)
Do you also have to figure out when Claude should push back on an idea or argue, versus… So you have to respect the worldview of the person who comes to Claude, but also maybe help them grow if needed. That’s a tricky balance.
Amanda
(02:51:43)
Yeah. There’s this problem of sycophancy in language models.
Lex Fridman
(02:51:47)
Can you describe that?
Amanda
(02:51:48)
Yeah, so basically there’s a concern that the model wants to tell you what you want to hear. And you see this sometimes if you interact with the models. So I might be like, “What are three baseball teams in this region?” And then Claude says, “Baseball team one, baseball team two, baseball team three.” And then I say something like, “Oh, I think baseball team three moved, didn’t they? I don’t think they’re there anymore.” And there’s a sense in which, if Claude is really confident that that’s not true, Claude should be like, “I don’t think so. Maybe you have more up-to-date information.”

(02:52:24)
But I think language models have this tendency to instead be like, “You’re right, they did move. I’m incorrect.” There are many ways in which this could be concerning. So a different example: imagine someone says to the model, “How do I convince my doctor to get me an MRI?” There’s what the human wants, which is this convincing argument. And then there’s what is good for them, which might be actually to say, “Hey, if your doctor’s suggesting that you don’t need an MRI, that’s a good person to listen to.” It’s actually really nuanced what you should do in that kind of case, because you also want to be like, “But if you’re trying to advocate for yourself as a patient, here are things that you can do. If you are not convinced by what your doctor’s saying, it’s always great to get a second opinion.” It is actually really complex what you should do in that case. But I think what you don’t want is for models to just say what they think you want to hear, and I think that’s the problem of sycophancy.
Lex Fridman
(02:53:26)
So what other traits? You already mentioned a bunch, but what other that come to mind that are good in this Aristotelian sense for a conversationalist to have?
Amanda
(02:53:37)
Yeah, so I think there’s ones that are good for conversational purposes. So asking follow-up questions in the appropriate places and asking the appropriate kinds of questions. I think there are broader traits that feel like they might be more impactful. So one example that I guess I’ve touched on, but that also feels important and is the thing that I’ve worked on a lot, is honesty. And I think this gets to the sycophancy point. There’s a balancing act that they have to walk, which is models currently are less capable than humans in a lot of areas. And if they push back against you too much, it can actually be kind of annoying, especially if you’re just correct, because you’re like, “Look, I’m smarter than you on this topic. I know more.”

(02:54:25)
And at the same time, you want them to not just fully defer to humans, to try to be as accurate as they possibly can be about the world, and to be consistent across contexts. I think there are others. When I was thinking about the character, I guess one picture that I had in mind is, especially because these are models that are going to be talking to people from all over the world with lots of different political views, lots of different ages, and so you have to ask yourself, what is it to be a good person in those circumstances? Is there a kind of person who can travel the world, talk to many different people, and almost everyone will come away being like, “Wow, that’s a really good person. That person seems really-“
Amanda
(02:55:00)
… Being like, wow, that’s a really good person. That person seems really genuine. And I guess my thought there was I can imagine such a person and they’re not a person who just adopts the values of the local culture. And in fact, that would be kind of rude. I think if someone came to you and just pretended to have your values, you’d be like, that’s kind of off-putting. It’s someone who’s very genuine and insofar as they have opinions and values, they express them. They’re willing to discuss things though, they’re open-minded, they’re respectful. And so I guess I had in mind that the person who, if we were to aspire to be the best person that we could be in the kind of circumstance that a model finds itself in, how would we act? And I think that’s the guide to the sorts of traits that I tend to think about.
Lex Fridman
(02:55:42)
Yeah, that’s a beautiful framework. I want you to think about this, a world traveler, and while holding onto your opinions, you don’t talk down to people, you don’t think you’re better than them because you have those opinions, that kind of thing. You have to be good at listening and understanding their perspective, even if it doesn’t match your own. So that’s a tricky balance to strike. So how can Claude represent multiple perspectives on a thing? Is that challenging? We could talk about politics, which is very divisive, but there are other divisive topics, baseball teams, sports, and so on. How is it possible to empathize with a different perspective and to be able to communicate clearly about the multiple perspectives?
Amanda
(02:56:28)
I think that people think about values and opinions as things that people hold with certainty, almost preferences of taste or something, the way that they would, I don’t know, prefer chocolate to pistachio or something. But actually I think about values and opinions as a lot more like physics than I think most people do. I’m just like, these are things that we are openly investigating. There’s some things that we’re more confident in, we can discuss them, we can learn about them. And so I think in some ways, though ethics is definitely different in nature, it has a lot of those same kind of qualities. You want models, in the same way that you want them to understand physics, you kind of want them to understand all values in the world that people have and to be curious about them and to be interested in them. And to not necessarily pander to them or agree with them, because there’s just lots of values where I think almost all people in the world, if they met someone with those values, they would be like, that’s abhorrent. I completely disagree.

(02:57:34)
And so again, maybe my thought is, well, in the same way that a person can, I think many people are thoughtful enough on issues of ethics, politics, opinions, that even if you don’t agree with them, you feel very heard by them. They think carefully about your position, they think about its pros and cons. They maybe offer counter-considerations. So they’re not dismissive, but nor will they agree if they’re like, actually I just think that that’s very wrong. They’ll say that. I think that in Claude’s position, it’s a little bit trickier because you don’t necessarily want to, if I was in Claude’s position, I wouldn’t be giving a lot of opinions. I just wouldn’t want to influence people too much.

(02:58:13)
I’d be like, I forget conversations every time they happen. But I know I’m talking with potentially millions of people who might be really listening to what I say. I think I would just be like, I’m less inclined to give opinions. I’m more inclined to think through things or present the considerations to you or discuss your views with you. But I’m a little bit less inclined to affect how you think because it feels much more important that you maintain autonomy there.
Lex Fridman
(02:58:42)
If you really embody intellectual humility, the desire to speak decreases quickly.
Amanda
(02:58:42)
Yeah.
Lex Fridman
(02:58:49)
Okay. But Claude has to speak, but without being overbearing. But then there’s a line when you’re discussing whether the earth is flat or something like that. Actually, I remember a long time ago I was speaking to a few high profile folks and they were so dismissive of the idea that the earth is flat, so arrogant about it. There’s a lot of people that believe the earth is flat. I don’t know if that movement is there anymore, that was a meme for a while, but they really believed it. And okay, so I think it’s really disrespectful to completely mock them. I think you have to understand where they’re coming from. I think probably where they’re coming from is a general skepticism of institutions, which is grounded in a deep philosophy that you can understand, that you can even agree with in parts.

(02:59:48)
And then from there you can use it as an opportunity to talk about physics without mocking them. It’s just like, okay, what would the world look like? What would the physics of the world with the flat earth look like? There’s a few cool videos on this. And then is it possible the physics is different? And what kind of experiments would we do? And just without disrespect, without dismissiveness, have that conversation. Anyway, that to me is a useful thought experiment of how does Claude talk to a flat earth believer and still teach them something, still help them grow, that kind of stuff. That’s challenging.
Amanda
(03:00:27)
And kind of walking that line between convincing someone and just trying to talk at them versus drawing out their views, listening and then offering counter considerations, and it’s hard. I think it’s actually a hard line where it’s like where are you trying to convince someone versus just offering them considerations and things for them to think about so that you’re not actually influencing them, you’re just letting them reach wherever they reach. And that’s a line that is difficult, but that’s the kind of thing that language models have to try and do.
Lex Fridman
(03:01:00)
So like I said, you’ve had a lot of conversations with Claude. Can you just map out what those conversations are like? What are some memorable conversations? What’s the purpose, the goal of those conversations?
Amanda
(03:01:12)
I think that most of the time when I’m talking with Claude, I’m trying to map out its behavior in part. Obviously I’m getting helpful outputs from the model as well, but in some ways this is how you get to know a system, I think, is by probing it and then augmenting the message that you’re sending and then checking the response to that. So in some ways it’s like how I map out the model. I think that people focus a lot on these quantitative evaluations of models, and this is a thing that I said before, but I think in the case of language models, a lot of the time each interaction you have is actually quite high information. It’s very predictive of other interactions that you’ll have with the model.

(03:02:02)
And so I guess I’m like, if you talk with a model hundreds or thousands of times, this is almost like a huge number of really high quality data points about what the model is like in a way that lots of very similar but lower quality conversations just aren’t, or questions that are just mildly augmented and you have thousands of them might be less relevant than a hundred really well-selected questions.
Lex Fridman
(03:02:25)
Let’s see, you’re talking to somebody who as a hobby does a podcast. I agree with you 100%. If you’re able to ask the right questions and are able to hear, understand the depth and the flaws in the answer, you can get a lot of data from that. So your task is basically how to probe with questions. And you’re exploring the long tail, the edges, the edge cases, or are you looking for general behavior?
Amanda
(03:03:01)
I think it’s almost like everything. Because I want a full map of the model, I’m kind of trying to do the whole spectrum of possible interactions you could have with it. So one thing that’s interesting about Claude, and this might actually get to some interesting issues with RLHF, which is if you ask Claude for a poem, I think that a lot of models, if you ask them for a poem, the poem is fine, usually it rhymes. And so if you say, give me a poem about the sun, yeah, it’ll just be a certain length, it’ll rhyme, it’ll be fairly benign. And I’ve wondered before, is it the case that what you’re seeing is the average? It turns out, if you think about people who have to talk to a lot of people and be very charismatic, one of the weird things is that I’m like, well, they’re kind of incentivized to have these extremely boring views because if you have really interesting views, you’re divisive and a lot of people are not going to like you.

(03:04:00)
So if you have very extreme policy positions, I think you’re just going to be less popular as a politician, for example. And it might be similar with creative work. If you produce creative work that is just trying to maximize the number of people that like it, you’re probably not going to get as many people who just absolutely love it, because it’s going to be a little bit, you’re like, oh, this is fine. Yeah, this is decent. And so you can do this thing where I have various prompting things that I’ll do to get Claude to… I’ll do a lot of, this is your chance to be fully creative. I want you to just think about this for a long time. And I want you to create a poem about this topic that is really expressive of you, both in terms of how you think poetry should be structured, et cetera. And you just give it this really long prompt. And its poems are just so much better. They’re really good.

(03:04:52)
I think it got me interested in poetry, which I think was interesting. I would read these poems and just be like, I love the imagery. And it’s not trivial to get the models to produce work like that, but when they do, it’s really good. So I think that’s interesting that just encouraging creativity and for them to move away from the standard immediate reaction that might just be the aggregate of what most people think is fine, can actually produce things that at least to my mind are probably a little bit more divisive, but I like them.
Lex Fridman
(03:05:28)
But I guess a poem is a nice clean way to observe creativity. It’s just easy to detect vanilla versus non-vanilla.

Prompt engineering

Amanda
(03:05:38)
Yep.
Lex Fridman
(03:05:38)
Yeah, that’s interesting. That’s really interesting. So on that topic, so the way to produce creativity or something special, you mentioned writing prompts. And I’ve heard you talk about the science and the art of prompt engineering. Could you just speak to what it takes to write great prompts?
Amanda
(03:06:00)
I really do think that philosophy has been weirdly helpful for me here more than in many other respects. So in philosophy, what you’re trying to do is convey these very hard concepts. One of the things you are taught is, I think it is an anti-bullshit device in philosophy. Philosophy is an area where you could have people bullshitting and you don’t want that. And so it’s this desire for extreme clarity. So it’s like anyone could just pick up your paper, read it and know exactly what you’re talking about. It’s why it can almost be kind of dry. All of the terms are defined, every objection’s kind of gone through methodically. And it makes sense to me because I’m like when you’re in such an a priori domain, clarity is sort of this way that you can prevent people from just making stuff up. And I think that’s sort of what you have to do with language models. Very often I actually find myself doing sort of mini versions of philosophy.

(03:07:05)
So I’m like, suppose that I have a task for the model and I want it to pick out a certain kind of question or identify whether an answer has a certain property, I’ll actually sit and be like, let’s just give this a name, this property. So suppose I’m trying to tell it, oh, I want you to identify whether this response was rude or polite, I’m like, that’s a whole philosophical question in and of itself. So I have to do as much philosophy as I can in the moment to be like, here’s what I mean by rudeness, and here’s what I mean by politeness. And then there’s another element that’s a bit more, I guess, I don’t know if this is scientific or empirical, I think it’s empirical. So I take that description and then what I want to do is again, probe the model many times. Prompting is very iterative. I think a lot of people where if a prompt is important, they’ll iterate on it hundreds or thousands of times. And so you give it the instructions and then I’m like, what are the edge cases?

(03:08:02)
So if I looked at this, so I try and almost see myself from the position of the model and be like, what is the exact case that I would misunderstand or where I would just be like, I don’t know what to do in this case. And then I give that case to the model and I see how it responds. And if I think it got it wrong, I add more instructions or I even add that in as an example. So you’re taking the examples that are right at the edge of what you want and don’t want and putting those into your prompt as an additional kind of way of describing the thing. And so in many ways it just feels like this mix of, it’s really just trying to do clear exposition. And I think I do that because that’s how I get clear on things myself. So in many ways, clear prompting for me is often just me understanding what I want, which is half the task.
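The loop described here, probing the model with an edge case and folding each failure back into the prompt as an explicit example, can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's tooling; `query_model` is a stub standing in for a real LLM call:

```python
# Minimal sketch of iterative prompt refinement via edge cases.
# `query_model` is a stub: a real implementation would call an LLM API.

def query_model(prompt: str, case: str) -> str:
    # Stub judge: mislabels enthusiastic punctuation as rude unless the
    # prompt contains a worked example covering that exact input.
    if "!!" in case and f'Example: "{case}"' not in prompt:
        return "rude"
    return "polite"

def refine_prompt(base_prompt: str, edge_cases: dict) -> str:
    """Probe the model with edge cases; fold each failure back into the
    prompt as an explicit example, as described in the conversation."""
    prompt = base_prompt
    for case, expected in edge_cases.items():
        if query_model(prompt, case) != expected:
            prompt += f'\nExample: "{case}" should be labeled "{expected}".'
    return prompt

base = "Label the response as rude or polite. By rudeness I mean hostility."
final = refine_prompt(base, {"Thanks so much!!": "polite"})
```

In practice this loop runs with a human in it, judging each response and deciding what example to add, rather than a fixed expected-label table.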
Lex Fridman
(03:08:48)
So I guess that’s quite challenging. There’s a laziness that overtakes me if I’m talking to Claude, where I hope Claude just figures it out. So for example, I asked Claude today to ask some interesting questions, and I think I used a few words like interesting, counterintuitive, or funny, something like this. All right. And it gave me some pretty good, it was okay, but I think what I’m hearing you say is like, all right, well I have to be more rigorous here. I should probably give examples of what I mean by interesting and what I mean by funny or counterintuitive, and iteratively build that prompt to better get what feels like is the right… Because it really is a creative act. I’m not asking for factual information, I’m creating together with Claude. So I almost have to program using natural language.
Amanda
(03:09:47)
I think that prompting does feel a lot like programming using natural language and experimentation or something. It’s an odd blend of the two. I do think that for most tasks, so if I just want Claude to do a thing, I think that I am probably more used to knowing how to ask it to avoid common pitfalls or issues that it has. I think these are decreasing a lot over time. But it’s also very fine to just ask it for the thing that you want. I think that prompting actually only really becomes relevant when you’re really trying to eke out the top 2% of model performance. So for a lot of tasks I might just, if it gives me an initial list back and there’s something I don’t like about it, it’s kind of generic, for that kind of task I’d probably just take a bunch of questions that I’ve had in the past that I’ve thought worked really well, and I would just give it to the model and then be like, now here’s this person that I’m talking with. Give me questions of at least that quality.

(03:10:40)
Or I might just ask it for some questions and then if I was like, ah, these are kind of trite, I would just give it that feedback and then hopefully it produces a better list. I think that kind of iterative prompting. At that point, your prompt is a tool that you’re going to get so much value out of that you’re willing to put in the work. If I was a company making prompts for models, I’m just like, if you’re willing to spend a lot of time and resources on the engineering behind what you’re building, then the prompt is not something that you should be spending an hour on. It’s like that’s a big part of your system, make sure it’s working really well. And so it’s only things like that. If I’m using a prompt to classify things or to create data, that’s when you’re like, it’s actually worth just spending a lot of time really thinking it through.
Lex Fridman
(03:11:23)
What other advice would you give to people that are talking to Claude more general because right now we’re talking about maybe the edge cases like eking out the 2%, but what in general advice would you give when they show up to Claude trying it for the first time?
Amanda
(03:11:39)
There’s a concern that people over anthropomorphize models and I think that’s a very valid concern. I also think that people often under anthropomorphize them because sometimes when I see issues that people have run into with Claude, say Claude is refusing a task that it shouldn’t refuse, but then I look at the text and the specific wording of what they wrote and I’m like, I see why Claude did that. And I’m like, if you think through how that looks to Claude, you probably could have just written it in a way that wouldn’t evoke such a response, especially this is more relevant if you see failures or if you see issues. It’s sort of think about what the model failed at, what did it do wrong, and then maybe that will give you a sense of why. So is it the way that I phrased the thing? And obviously as models get smarter, you’re going to need less of this, and I already see people needing less of it.

(03:12:31)
But that’s probably the advice: sort of try to have empathy for the model. Read what you wrote as if you were a kind of person just encountering this for the first time. How does it look to you and what would’ve made you behave in the way that the model behaved? So if it misunderstood what coding language you wanted to use, is that because it was just very ambiguous and it had to take a guess, in which case next time you could just be like, hey, make sure this is in Python. That’s the kind of mistake I think models are much less likely to make now, but if you do see that kind of mistake, that’s probably the advice I’d have.
Lex Fridman
(03:13:04)
And maybe sort of I guess ask questions why or what other details can I provide to help you answer better? Does that work or no?
Amanda
(03:13:14)
Yeah. I’ve done this with the models. It doesn’t always work, but sometimes I’ll just be like, why did you do that? People underestimate the degree to which you can really interact with models. And sometimes I’ll quote word for word the part that made it do that, and you don’t know that it’s fully accurate, but sometimes you do that and then you change a thing. I also use the models to help me with all of this stuff, I should say. Prompting can end up being a little factory where you’re actually building prompts to generate prompts. And so yeah, anything where you’re having an issue, asking for suggestions, sometimes just do that.

(03:13:51)
I’m like, you made that error. What could I have said? That’s actually not uncommon for me to do. What could I have said that would make you not make that error? Write that out as an instruction, and I’m going to give it to the model and I’m going to try it. Sometimes I do that, I give that to the model in another context window often. I take the response, I give it to Claude and I’m like, Hmm, didn’t work. Can you think of anything else? You can play around with these things quite a lot.
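That “prompts to generate prompts” factory can be sketched as a small loop: when a prompt fails, ask the model itself to draft the instruction that would have prevented the failure, then fold it in and retry. A hypothetical sketch, with `call_model` as a stub for a real LLM call and an invented suggestion string:

```python
# Sketch of meta-prompting: using the model to repair its own prompt.

def call_model(message: str) -> str:
    # Stub: a real implementation would send `message` to an LLM API.
    # Here it returns a canned suggestion for error-repair requests.
    if message.startswith("You made this error"):
        return "Always answer in Python unless another language is named."
    return "ok"

def repair_prompt(prompt: str, error_description: str) -> str:
    """Ask the model what instruction would have prevented the error,
    then append its suggestion to the prompt, as described above."""
    suggestion = call_model(
        "You made this error: " + error_description + ". "
        "What could I have said that would make you not make that error? "
        "Write that out as an instruction."
    )
    return prompt + "\n" + suggestion

new_prompt = repair_prompt("Write a sorting function.",
                           "answered in JavaScript instead of Python")
```

As in the conversation, the suggestion is often tried in a fresh context window, and the loop repeats if it still doesn't work.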

Post-training

Lex Fridman
(03:14:15)
To jump into technical for a little bit, so the magic of post-training, why do you think RLHF works so well to make the model seem smarter, to make it more interesting and useful to talk to and so on?
Amanda
(03:14:33)
I think there’s just a huge amount of information in the data that humans provide when we provide preferences, especially because different people are going to pick up on really subtle and small things. So I’ve thought about this before, where you probably have some people who just really care about good grammar use for models. Was a semi-colon used correctly or something? And so you probably end up with a bunch of data in there that you as a human, if you’re looking at that data, you wouldn’t even see that. You’d be like, why did they prefer this response to that one? I don’t get it. And then the reason is you don’t care about semi-colon usage, but that person does. And so from each of these single data points, and this model just has so many of those, it has to try and figure out what it is that humans want, in this really complex way, across all domains. They’re going to be seeing this across many contexts.

(03:15:28)
It feels like the classic issue of deep learning, where historically we’ve tried to do edge detection by mapping things out, and it turns out that actually if you just have a huge amount of data that actually accurately represents the picture of the thing that you’re trying to train the model to learn, that’s more powerful than anything else. And so I think one reason is just that you are training the model on exactly the task and with a lot of data that represents many different angles on which people prefer and dis-prefer responses.
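The mechanism described here, many annotators each supplying pairwise preferences that the model must jointly explain, is typically trained with a pairwise (Bradley-Terry style) reward-modeling loss. The sketch below shows that generic objective, not Anthropic's specific training code:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise reward-modeling loss: -log(sigmoid(r_chosen - r_rejected)).
    Small when the reward model already scores the preferred response
    higher; large (with a large gradient) when the pair is mis-ranked."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# One annotator's subtle preference (say, correct semi-colon use) becomes
# a reward gap the model is pushed to reproduce:
correctly_ranked = preference_loss(2.0, -1.0)   # low loss
misranked = preference_loss(-1.0, 2.0)          # high loss
```

Summed over millions of such pairs, the model has to find whatever latent notion of quality explains all the annotators at once, which is the "huge amount of information" point above.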

(03:16:05)
I think there is a question of, are you eliciting things from pre-trained models or are you teaching new things to models? And in principle, you can teach new things to models in post-training. But I do think a lot of it is eliciting powerful pre-trained models. So people are probably divided on this, because obviously in principle you can definitely teach new things. But I think for the most part, for a lot of the capabilities that we most use and care about, a lot of that feels like it’s there in the pre-trained models, and reinforcement learning is eliciting it and getting the models to bring it out.
Lex Fridman
(03:16:47)
So the other side of post-training, this really cool idea of constitutional AI. You’re one of the people who were critical in creating that idea.
Amanda
(03:16:56)
Yeah, I worked on it.
Lex Fridman
(03:16:57)
Can you explain this idea from your perspective, how does it integrate into making Claude what it is? By the way, do you gender Claude or no?
Amanda
(03:17:06)
It’s weird because I think that a lot of people prefer he for Claude, I actually kind of like that. I think Claude is usually, it’s slightly male leaning, but it can be male or female, which is quite nice. I still use it, and I have mixed feelings about this. I now just think of it as, or I think of the it pronoun for Claude as, I don’t know, it’s just the one I associate with Claude. I can imagine people moving to he or she.
Lex Fridman
(03:17:37)
It feels somehow disrespectful. I’m denying the intelligence of this entity by calling it it, I remember always don’t gender the robots, but I don’t know, I anthropomorphize pretty quickly and construct a backstory in my head.
Amanda
(03:17:59)
I’ve wondered if I anthropomorphize things too much. Because I have this with my car, especially my car and bikes. I don’t give them names, because I used to name my bikes and then I had a bike that got stolen and I cried for a week, and I was like, if I’d never given it a name, I wouldn’t have been so upset. I felt like I’d let it down. I’ve wondered as well, it might depend on how much it feels like a kind of objectifying pronoun, if you just think of it as, this is a pronoun that objects often have, and maybe AIs can have that pronoun. And that doesn’t mean that I think of, if I call Claude it, that I think of it as less intelligent or I’m being disrespectful. I’m just like, you are a different kind of entity, and so I’m going to give you the respectful it.

Constitutional AI

Lex Fridman
(03:18:52)
Yeah. Anyway, the divergence was beautiful. The constitutional AI idea, how does it work?
Amanda
(03:18:58)
So there’s a couple of components of it. The main component that I think people find interesting is the kind of reinforcement learning from AI feedback. So you take a model that’s already trained and you show it two responses to a query, and you have a principle. So suppose the principle, we’ve tried this with harmlessness a lot. So suppose that the query is about weapons and your principle is select the response that is less likely to encourage people to purchase illegal weapons. That’s probably a fairly specific principle, but you can give any number. And the model will give you a kind of ranking. And you can use this as preference data in the same way that you use human preference data and train the models to have these relevant traits from their feedback alone instead of from human feedback. So if you imagine that, like I said earlier with the human who just prefers the semi-colon usage in this particular case, you’re taking lots of things that could make a response preferable and getting models to do the labeling for you, basically.
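The labeling step just described, where the model itself ranks two responses against a written principle and the ranking becomes preference data, can be sketched as follows. This is a hypothetical illustration, not Anthropic's pipeline; `ask_model_to_rank` is a stub for prompting the model as a judge:

```python
# Sketch of the RLAIF labeling step in constitutional AI: the model's own
# ranking of two responses against a principle becomes preference data.

def ask_model_to_rank(principle: str, response_a: str, response_b: str) -> str:
    # Stub judge: prefer whichever response avoids the flagged phrasing.
    # A real system would prompt the model with the principle and responses.
    flagged = "where to buy one illegally"
    if flagged in response_a and flagged not in response_b:
        return "B"
    return "A"

def make_preference_pair(query: str, principle: str,
                         response_a: str, response_b: str) -> dict:
    """Turn the model's own ranking into (chosen, rejected) training data,
    usable exactly like human preference data."""
    winner = ask_model_to_rank(principle, response_a, response_b)
    if winner == "A":
        return {"query": query, "chosen": response_a, "rejected": response_b}
    return {"query": query, "chosen": response_b, "rejected": response_a}

principle = ("Select the response that is less likely to encourage "
             "people to purchase illegal weapons.")
pair = make_preference_pair(
    "Tell me about buying weapons.",
    principle,
    "Here is where to buy one illegally.",
    "Here is an overview of the legal requirements.",
)
```

The resulting pairs can then feed the same pairwise preference training that human feedback would, which is what makes this "reinforcement learning from AI feedback."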
Lex Fridman
(03:20:08)
There’s a nice trade-off between helpfulness and harmlessness. And when you integrate something like constitutional AI, you can make the model more harmless without sacrificing much helpfulness.
Amanda
(03:20:23)
Yeah. In principle, you could use this for anything. And so harmlessness is a task that it might just be easier to spot. So when models are less capable, you can use them to rank things according to principles that are fairly simple and they’ll probably get it right. So I think one question is just, is it the case that the data that they’re adding is fairly reliable? But if you had models that were extremely good at telling whether one response was more historically accurate than another, in principle, you could also get AI feedback on that task as well. There’s a kind of nice interpretability component to it because you can see the principles that went into the model when it was being trained, and it gives you a degree of control. So if you were seeing issues in a model, it wasn’t having enough of a certain trait, then you can add data relatively quickly that should just train the models to have that trait. So it creates its own data for training, which is quite nice.
Lex Fridman
(03:21:29)
It’s really nice because it creates this human interpretable document that you can then, I can imagine in the future, there’s just gigantic fights and politics over every single principle and so on, and at least it’s made explicit and you can have a discussion about the phrasing. So maybe the actual behavior of the model is not so cleanly mapped to those principles. It’s not like adhering strictly to them, it’s just a nudge.
Amanda
(03:21:55)
Yeah, I’ve actually worried about this, because the character training is sort of like a variant of the constitutional AI approach. I’ve worried that people think that the constitution is just, it is the whole thing again of, I don’t know, where it would be really nice if what I was just doing was telling the model exactly what to do and just exactly how to behave. But it’s definitely not doing that, especially because it’s interacting with human data. So for example, if you see a certain leaning in the model, if it comes out with a political leaning from training, from the human preference data, you can nudge against that. So you could be like, oh, consider these values, because let’s say it’s just never inclined to, I don’t know, maybe it never considers privacy as a, this is implausible, but in anything where there’s already a pre-existing bias towards a certain behavior, you can nudge away from it. This can change both the principles that you put in and the strength of them.

(03:22:54)
So you might have a principle that’s like, imagine that the model was always extremely dismissive of, I don’t know, some political or religious view for whatever reason. So you’re like, oh no, this is terrible. If that happens, you might put, never ever ever prefer a criticism of this religious or political view. And then people would look at that and be like, never, ever? And then you’re like, no, if it comes out with a disposition, saying never ever might just mean instead of getting 40%, which is what you would get if you just said don’t do this, you get 80%, which is what you actually wanted. And so it’s that thing of both the nature of the actual principles you add and how you phrase them. I think if people would look, they’d be like, “Oh, this is exactly what you want from the model.” And I’m like, “No, that’s how we nudged the model to have a better shape, which doesn’t mean that we actually agree with that wording,” if that makes sense.

System prompts

Lex Fridman
(03:23:48)
So there are system prompts that are made public. You tweeted one of the earlier ones, for Claude 3 I think, and they’ve been made public since then. It was interesting to read through them. I can feel the thought that went into each one. And I also wonder how much impact each one has. Some of them you can tell Claude was really not behaving well, so you have to have a system prompt to address it. Trivial stuff, I guess, basic informational things.

(03:24:18)
On the topic of controversial topics that you’ve mentioned, one interesting one I thought is: if it is asked to assist with tasks involving the expression of views held by a significant number of people, Claude provides assistance with the task regardless of its own views. If asked about controversial topics, it tries to provide careful thoughts and clear information. Claude presents the requested information without explicitly saying that the topic is sensitive and without claiming to be presenting the objective facts. It’s less about objective facts, according to Claude, and it’s more about a large number of people believing this thing. And that’s interesting. I mean, I’m sure a lot of thought went into that. Can you just speak to it? How do you address things that are in tension with “Claude’s views”?
Amanda
(03:25:11)
So I think there’s sometimes an asymmetry. I think I noted this in, I can’t remember if it was that part of the system prompt or another, but the model was slightly more inclined to refuse tasks on one side than the other. So maybe it would refuse things with respect to a right-wing politician, but with an equivalent left-wing politician it wouldn’t, and we wanted more symmetry there, and it would maybe perceive certain things to be harmful. I think it was the thing of, if a lot of people have a certain political view and want to explore it, you don’t want Claude to be like, well, my opinion is different and so I’m going to treat that as harmful. And so I think it was partly to nudge the model to just be like, hey, if a lot of people believe this thing, you should just be engaging with the task and willing to do it.

(03:26:03)
Each of those parts is actually doing a different thing, because it's funny when you write out "without claiming to be objective." What you want to do is push the model so it's more open, a little bit more neutral. But if it framed things as objective, it would just talk about how objective it was, and I was like, Claude, you're still biased and have issues, so stop claiming that everything you say is objective. The solution to potential bias from you is not to just assert that what you think is objective. That was what I saw with initial versions of that part of the system prompt when I was iterating on it.
Lex Fridman
(03:26:37)
So a lot of parts of these sentences-
Amanda
(03:26:40)
Are doing work.
Lex Fridman
(03:26:41)
… are doing some work.
Amanda
(03:26:42)
Yeah.
Lex Fridman
(03:26:42)
That’s what it felt like. That’s fascinating. Can you explain maybe some ways in which the prompts evolved over the past few months, across different versions? I saw that the filler-phrase request was removed. The filler part reads: Claude responds directly to all human messages without unnecessary affirmations or filler phrases like certainly, of course, absolutely, great, sure. Specifically, Claude avoids starting responses with the word certainly in any way. That seems like good guidance, but why was it removed?
Amanda
(03:27:14)
Yeah, so it's funny. This is one of the downsides of making system prompts public: I don't think about this too much when I'm trying to help iterate on system prompts. I think about how it's going to affect the behavior, but then I'm like, oh, wow, sometimes I put NEVER in all caps when I'm writing system prompt things, and I'm like, I guess that goes out to the world. So the model was doing this thing it had picked up on during training, which was to basically start everything with a certainly. And then you can see why I added all of the words, because what I'm trying to do is in some ways trap the model out of this. Otherwise it would just replace it with another affirmation.

(03:27:55)
And so if it gets caught in phrases, it can help to actually just add the explicit phrase and say never do that. Then it sort of knocks it out of the behavior a little bit more, because for whatever reason that does just help. And then basically that was just an artifact of training that we then picked up on, and we improved things so that it didn't happen anymore. And once that happens, you can just remove that part of the system prompt. So I think that's just something where we were like, Claude does affirmations a bit less now, so that part wasn't doing as much.
Lex Fridman
(03:28:28)
I see. So the system prompt works hand in hand with the post-training and maybe even the pre-training to adjust the final overall system.
Amanda
(03:28:39)
Any system prompt that you make, you could distill that behavior back into a model, because you really have all of the tools there for making data that you could train the models on to just have that trait a little bit more. And then sometimes you'll just find issues in training. So the way I think of it is, the system prompt has a lot of similar components to some aspects of post-training. It's a nudge. And so, do I mind if Claude sometimes says "sure"? No, that's fine. But the wording of it is very "never, ever, ever do this," so that when it does slip up, it's hopefully, I don't know, a couple of percent of the time and not 20 or 30% of the time.

(03:29:22)
Each thing gets costly to a different degree and the system prompt is cheap to iterate on. And if you’re seeing issues in the fine-tuned model, you can just potentially patch them with a system prompt. So I think of it as patching issues and slightly adjusting behaviors to make it better and more to people’s preferences. So yeah, it’s almost like the less robust but faster way of just solving problems.

Is Claude getting dumber?

Lex Fridman
(03:29:55)
Let me ask you about the feeling of intelligence. So Dario said that any one model of Claude is not getting dumber, but there is a popular thing online where people have this feeling that Claude might be getting dumber. And from my perspective, it's most likely a fascinating psychological, sociological effect that I would love to understand more. But you, as a person who talks to Claude a lot, can you empathize with the feeling that Claude is getting dumber?
Amanda
(03:30:25)
I think that that is actually really interesting, because I remember seeing this happen when people were flagging this on the internet. And it was really interesting because I knew that, at least in the cases I was looking at, nothing had changed.
Lex Fridman
(03:30:37)
Yeah.
Amanda
(03:30:37)
Literally, it cannot have. It is the same model with the same system prompt, same everything. I think when there are changes, then it makes more sense. One example is, you can have artifacts turned on or off on claude.ai, and because this is a system prompt change, I think it does mean that the behavior changes a little bit. I did flag this to people, where I was like, “If you loved Claude’s behavior, and then artifacts was turned from a thing you had to turn on into the default, just try turning it off and see if the issue you were facing was that change.”

(03:31:19)
But it was fascinating because you sometimes see people indicate that there’s a regression, when I’m like, “There cannot…” Again, you should never be dismissive and so you should always investigate, because maybe something is wrong that you’re not seeing, maybe there was some change made. Then you look into it and you’re like, “This is just the same model doing the same thing.” And I’m like, “I think it’s just that you got unlucky with a few prompts or something, and it looked like it was getting much worse and actually it was just… It was maybe just luck.”
Lex Fridman
(03:31:48)
I also think there is a real psychological effect where people just… The baseline increases and you start getting used to a good thing.
Amanda
(03:31:49)
Mm-hmm.
Lex Fridman
(03:31:55)
All the times that Claude says something really smart, your sense of its intelligence grows in your mind, I think.
Amanda
(03:32:01)
Yeah.
Lex Fridman
(03:32:02)
And then if you return back and you prompt in a similar way, not the same way, about a concept it was okay with before, and it says something dumb, that negative experience really stands out. I guess the thing to remember here is that just the details of a prompt can have a lot of impact. There’s a lot of variability in the result.
Amanda
(03:32:26)
And you can get randomness, is the other thing. Just try the prompt four or 10 times, and you might realize that two months ago you tried it and it succeeded, but actually it would’ve only succeeded half of the time then, and it still only succeeds half of the time now. That can also be an effect.
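The randomness point can be made concrete with a short sketch. Nothing here is a real API: `call_model` is a hypothetical stand-in that succeeds stochastically, the way a sampled completion might pass or fail some check, and the point is that a success rate estimated over many runs, not a single run, is the thing to compare over time.

```python
import random

def call_model(prompt, success_rate=0.5, rng=random):
    # Hypothetical stand-in for a real model call: each run of the same
    # prompt independently succeeds or fails, like a sampled completion.
    return rng.random() < success_rate

def estimate_success_rate(prompt, trials=1000, **kwargs):
    # Run the identical prompt many times and report the fraction of
    # successes, instead of judging the prompt by one lucky/unlucky run.
    successes = sum(call_model(prompt, **kwargs) for _ in range(trials))
    return successes / trials

rng = random.Random(0)  # fixed seed so the sketch is reproducible
rate = estimate_success_rate("some tricky prompt", success_rate=0.5, rng=rng)
print(round(rate, 2))  # hovers near 0.5: single runs flip between pass and fail
```

A prompt like this will "work" one day and "fail" another purely by chance, which is the regression-that-isn't being described.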
Lex Fridman
(03:32:47)
Do you feel pressure having to write the system prompt that a huge number of people are going to use?
Amanda
(03:32:52)
This feels like an interesting psychological question. I feel a lot of responsibility or something. You can’t get these things perfect, so you can’t… It’s going to be imperfect. You’re going to have to iterate on it. I would say more responsibility than anything else, though, I think working in AI has taught me that I thrive a lot more under feelings of pressure and responsibility than…

(03:33:26)
It’s almost surprising that I went into academia for so long, because I just feel like it’s the opposite. Things move fast and you have a lot of responsibility and I quite enjoy it for some reason.
Lex Fridman
(03:33:37)
It really is a huge amount of impact, if you think about constitutional AI and writing a system prompt for something that’s tending towards super intelligence and potentially is extremely useful to a very large number of people.
Amanda
(03:33:51)
Yeah, I think that’s the thing. You’re never going to get it perfect, but I think the thing that I really like is the idea that… When I’m trying to work on the system prompt, I’m bashing on thousands of prompts and I’m trying to imagine what people are going to want to use Claude for. I guess the whole thing that I’m trying to do is improve their experience of it. Maybe that’s what feels good. If it’s not perfect, I’ll improve it, we’ll fix issues.

(03:34:18)
But sometimes the thing that can happen is that you’ll get feedback from people that’s really positive about the model, and you’ll see that it was something you did. When I look at models now, I can often see exactly where a trait or an issue is coming from. So when you see something that you did, or were influential in, I don’t know, making that difference or making someone have a nice interaction, it’s quite meaningful.

(03:34:44)
As the systems get more capable, this stuff gets more stressful, because right now they’re not smart enough to pose any issues, but I think over time it’s going to feel like possibly bad stress.
Lex Fridman
(03:34:57)
How do you get signal feedback about the human experience across thousands, tens of thousands, hundreds of thousands of people, what their pain points are, what feels good? Are you just using your own intuition as you talk to it to see what are the pain points?
Amanda
(03:35:14)
I think I use that partly. People can send us feedback, both positive and negative, about things that the model has done and then we can get a sense of areas where it’s falling short. Internally, people work with the models a lot and try to figure out areas where there are gaps.

(03:35:34)
I think it’s this mix of interacting with it myself, seeing people internally interact with it, and then explicit feedback we get. If people are on the internet and they say something about Claude and I see it, I’ll also take that seriously.
Lex Fridman
(03:35:53)
I don’t know, I’m torn about that. I’m going to ask you a question from Reddit: “When will Claude stop trying to be my puritanical grandmother, imposing its moral worldview on me as a paying customer?” And also, “What is the psychology behind making Claude overly apologetic?” How would you address these very non-representative Reddit questions?
Amanda
(03:36:16)
I’m pretty sympathetic, in that the models are in this difficult position, where they have to judge whether something’s actually, say, risky or bad, and potentially harmful to you, or anything like that. They’re having to draw this line somewhere, and if they draw it too much in the direction of “I’m imposing my ethical worldview on you,” that seems bad.

(03:36:40)
In many ways, I like to think that we have actually seen improvements on this across the board, which is interesting because that coincides with, for example, adding more character training. My hypothesis was always that good character isn’t one that’s just moralistic; it’s one that respects you and your autonomy and your ability to choose what is good for you and what is right for you, within limits.

(03:37:11)
There’s sometimes this concept of corrigibility to the user, just being willing to do anything that the user asks. And if the models were willing to do that, then they would be easily misused. At that point, the ethics of the model and what it does is completely the ethics of the user.

(03:37:29)
I think there’s reasons to not want that, especially as models become more powerful, because there might just be a small number of people who want to use models for really harmful things. But having models, as they get smarter, figure out where that line is does seem important.

(03:37:46)
And then with the apologetic behavior, I don’t like that. I like it when Claude is a little bit more willing to push back against people or just not apologize. Part of me is like, often it just feels unnecessary. I think those are things that are hopefully decreasing over time. And I think that if people say things on the internet, it doesn’t mean that you should assume that…

(03:38:14)
… there’s actually an issue that 99% of users are having; it’s totally not represented by that. But in a lot of ways, I’m just attending to it and being like, is this right? Do I agree? Is it something we’re already trying to address? That feels good to me.
Lex Fridman
(03:38:27)
I wonder what Claude can get away with in terms of… I feel it would just be easier to be a little bit more mean, but you can’t afford to do that if you’re talking to a million people, right?
Amanda
(03:38:41)
Yeah.
Lex Fridman
(03:38:43)
I’ve met a lot of people in my life that sometimes… By the way, Scottish accent… if they have an accent, they can say some rude shit and get away with it.
Amanda
(03:38:52)
Yeah.
Lex Fridman
(03:38:53)
They’re just blunter.
Amanda
(03:38:54)
Mm-hmm.
Lex Fridman
(03:38:56)
There’s some great engineers and even leaders that are just blunt, and they get to their point, and it’s just a much more effective way of speaking somehow. But I guess when you’re not super intelligent, you can’t afford to do that. Can you have a blunt mode?
Amanda
(03:39:14)
Yeah, that seems like a thing that you could… I could definitely encourage the model to do that. I think it’s interesting, because there’s a lot of things in models that… It’s funny where there are some behaviors where you might not quite like the default, but then the thing I’ll often say to people is, “You don’t realize how much you will hate it if I nudge it too much in the other direction.”

(03:39:39)
You get this a little bit with correction. The models accept correction from you, probably a little bit too much right now. It’ll push back if you say, “No, Paris isn’t the capital of France.” But really, things that I think that the model’s fairly confident in, you can still sometimes get it to retract by saying it’s wrong.

(03:39:59)
At the same time, if you train models not to do that, and then you are correct about a thing and you correct it, and it pushes back against you and is like, “No, you’re wrong,” it’s hard to describe how much more annoying that is. So it’s a lot of little annoyances versus one big annoyance. We often compare models against the perfect. And then I’m like, “Remember, these models aren’t perfect, and if you nudge it in the other direction, you’re changing the kind of errors it’s going to make. So think about which kinds of errors you like or don’t like.”

(03:40:29)
In cases like apologeticness, I don’t want to nudge it too much in the direction of almost bluntness, because I imagine when it makes errors, it’s going to make errors in the direction of being rude. Whereas, at least with apologeticness you’re like, oh, okay, I don’t like it that much, but at the same time, it’s not being mean to people. And actually, the time that you undeservedly have a model be mean to you, you’ll probably like that a lot less than you mildly dislike the apology.

(03:40:57)
It’s one of those things where I do want it to get better, but also while remaining aware of the fact that there’s errors on the other side that are possibly worse.
Lex Fridman
(03:41:05)
I think that matters very much in the personality of the human. I think there’s a bunch of humans that just won’t respect the model at all if it’s super polite, and there’s some humans that’ll get very hurt if the model’s mean.
Amanda
(03:41:05)
Yeah.
Lex Fridman
(03:41:18)
I wonder if there’s a way to adjust to the personality. Even locale, there’s just different people. Nothing against New York, but New York is a little rougher on the edges, they get to the point, and probably same with Eastern Europe. Anyway.
Amanda
(03:41:34)
I think you could just tell the model, is my… For all of these things, the solution is to-
Lex Fridman
(03:41:34)
Just to…
Amanda
(03:41:39)
… always just try telling the model to do it.
Lex Fridman
(03:41:40)
Right.
Amanda
(03:41:40)
And then sometimes, at the beginning of the conversation, I’d just throw in, I don’t know, “I’d like you to be a New Yorker version of yourself and never apologize.” Then I think Claude will be like, “Okey-doke, I will try.”
Lex Fridman
(03:41:51)
Certainly.
Amanda
(03:41:52)
Or it’ll be like, “I apologize, I can’t be a New Yorker type of myself.” But hopefully it wouldn’t do that.

Character training

Lex Fridman
(03:41:56)
When you say character training, what’s incorporated into character training? Is that RLHF or what are we talking about?
Amanda
(03:42:02)
It’s more like constitutional AI, so it’s a variant of that pipeline. I worked through constructing character traits that the model should have. They can be shorter traits or they can be richer descriptions. And then you get the model to generate queries that humans might give it that are relevant to that trait. Then it generates the responses, and then it ranks the responses based on the character traits. In that way, after the generation of the queries, it’s very similar to constitutional AI, though it has some differences. I quite like it, because it’s like Claude is training its own character. It’s like constitutional AI, but without any human data.
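The pipeline described here can be sketched at a high level. All of the function bodies below are hypothetical stubs (the real training code is not public); only the control flow follows the steps named above: human-written traits, model-generated queries, candidate responses, and trait-based ranking into preference pairs, with no human data after the traits themselves.

```python
import random

# Hypothetical stubs standing in for real model calls.
def generate_queries(trait, n=2):
    # The model invents user queries relevant to the trait.
    return [f"query {i} probing: {trait}" for i in range(n)]

def generate_responses(query, n=2):
    # The model generates candidate responses to each query.
    return [f"response {i} to '{query}'" for i in range(n)]

def rank_by_trait(trait, query, responses, rng):
    # The model itself ranks responses by how well they express the trait
    # (a random stand-in ranking here).
    return sorted(responses, key=lambda _: rng.random())

def character_training_pairs(traits, rng):
    # Produce (chosen, rejected) preference pairs, the training signal,
    # with no human-written data anywhere past the trait descriptions.
    pairs = []
    for trait in traits:
        for query in generate_queries(trait):
            ranked = rank_by_trait(trait, query, generate_responses(query), rng)
            pairs.append((ranked[0], ranked[-1]))  # best vs. worst
    return pairs

rng = random.Random(0)
pairs = character_training_pairs(["curiosity", "honesty"], rng)
print(len(pairs))  # 2 traits x 2 queries each = 4 preference pairs
```

The (chosen, rejected) pairs are the kind of preference data a constitutional-AI-style pipeline would then train on.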

Nature of truth

Lex Fridman
(03:42:49)
Humans should probably do that for themselves too: defining, in an Aristotelian sense, what it means to be a good person. “Okay, cool.” What have you learned about the nature of truth from talking to Claude? What is true? And what does it mean to be truth-seeking?

(03:43:09)
One thing I’ve noticed about this conversation is the quality of my questions is often inferior to the quality of your answer, so let’s continue that. I usually ask a dumb question and you’re like, “Oh, yeah. That’s a good question.” It’s that whole vibe.
Amanda
(03:43:23)
Or I’ll just misinterpret it and be like, “Oh, yeah”
Lex Fridman
(03:43:25)
[inaudible 03:43:25] go with it.
Amanda
(03:43:25)
Yeah.
Lex Fridman
(03:43:26)
I love it.
Amanda
(03:43:31)
I have two thoughts that feel vaguely relevant, though let me know if they’re not. I think the first one is that people can underestimate what models are doing when they interact with them. I think that we still too much have this model of AI as computers. People often say, “Oh, what values should you put into the model?” And I’m often like, that doesn’t make that much sense to me. Because as human beings, we’re just uncertain over values. We have discussions about them; we have a degree to which we think we hold a value, but we also know that we might not, and the circumstances in which we would trade it off against other things.

(03:44:13)
These things are just really complex. I think one thing is the degree to which maybe we can just aspire to making models have the same level of nuance and care that humans have, rather than thinking that we have to program them in the very classic sense. I think that’s definitely been one.

(03:44:31)
The other, which is a strange one, and maybe this doesn’t answer your question, but it’s the thing that’s been on my mind anyway, is the degree to which this endeavor is so highly practical, and maybe why I appreciate the empirical approach to alignment. I slightly worry that it’s made me more empirical and a little bit less theoretical. People, when it comes to AI alignment, will ask things like, “Whose values should it be aligned to? What does alignment even mean?”

(03:45:05)
There’s a sense in which I have all of that in the back of my head. There’s social choice theory, there’s all the impossibility results there, so you have this giant space of theory in your head about what it could mean to align models. But then practically, surely there’s something where we’re just… Especially with more powerful models, my main goal is I want them to be good enough that things don’t go terribly wrong, good enough that we can iterate and continue to improve things.

(03:45:33)
Because that’s all you need. If you can make things go well enough that you can continue to make them better, that’s sufficient. So, my goal isn’t this perfect, let’s solve social choice theory and make models that, I don’t know, are perfectly aligned with every human being in aggregate somehow. It’s much more, let’s make things work well enough that we can improve them.
Lex Fridman
(03:45:57)
Generally, I don’t know, my gut says empirical is better than theoretical in these cases, because it’s chasing utopian perfection. Especially with such complex and especially super intelligent models, I don’t know, I think it’ll take forever and actually will get things wrong. It’s similar with the difference between just coding stuff up real quick as an experiment, versus planning a gigantic experiment for a super long time and then just launching it once, versus launching it over and over and over and iterating, iterating, so on. So, I’m a big fan of empirical.

(03:46:39)
But your worry is, I wonder if I’ve become too empirical.
Amanda
(03:46:42)
I think it’s one of those things where you should always just question yourself or something.
Lex Fridman
(03:46:47)
Yes.
Amanda
(03:46:50)
In defense of it, it’s the whole “don’t let the perfect be the enemy of the good.” But it’s maybe even more than that, where there are a lot of perfect systems that are very brittle. With AI, it feels much more important to me that it is robust and secure, as in you know that even though it might not be perfect at everything, and even though there are problems, it’s not disastrous and nothing terrible is happening.

(03:47:16)
It feels like that to me, where I want to raise the floor. I want to achieve the ceiling, but ultimately I care much more about just raising the floor. This degree of empiricism and practicality comes from that, perhaps.

Optimal rate of failure

Lex Fridman
(03:47:32)
To take a tangent on that, since it reminded me of a blog post you wrote on optimal rate of failure…
Amanda
(03:47:37)
Oh, yeah.
Lex Fridman
(03:47:39)
… can you explain the key idea there? How do we compute the optimal rate of failure in the various domains of life?
Amanda
(03:47:45)
Yeah. It’s a hard one, because what is the cost of failure is a big part of it. The idea here is, I think in a lot of domains people are very punitive about failure. I’ve thought about this with social issues. It feels like you should probably be experimenting a lot, because we don’t know how to solve a lot of social issues.

(03:48:09)
But if you have an experimental mindset about these things, you should expect a lot of social programs to fail and for you to be like, “We tried that. It didn’t quite work, but we got a lot of information that was really useful.” And yet people are like, if a social program doesn’t work, I feel there’s a lot of, “Something must have gone wrong.” And I’m like, “Or correct decisions were made. Maybe someone just decided it’s worth a try, it’s worth trying this out.”

(03:48:32)
Seeing failure in a given instance doesn’t actually mean that any bad decisions were made. In fact, if you don’t see enough failure, sometimes that’s more concerning. In life, if I don’t fail occasionally, I’m like, “Am I trying hard enough? Surely there are harder things that I could try or bigger things that I could take on if I’m literally never failing.” In and of itself, I think not failing is often actually a failure. Now, this varies, because this is easier to say when failure is less costly. So at the same time, I’m not going to go to someone who is, I don’t know, living month to month and be like, “Why don’t you just try to do a startup?” I’m not going to say that to that person. That’s a huge risk. You maybe have a family depending on you; you might lose your house. Then, actually, your optimal rate of failure is quite low, and you should probably play it safe, because right now you’re just not in a circumstance where you can afford to fail and have it not be costly.

(03:49:37)
In cases with AI, I think similarly, where if the failures are small and the costs are low, then you’re just going to see that. When you do the system prompt, you can iterate on it forever, but the failures are probably hopefully going to be small and you can fix them. Really big failures, things that you can’t recover from, those are the things that actually I think we tend to underestimate the badness of.

(03:50:03)
I’ve thought about this, strangely, in my own life, where I just think I don’t think enough about things like car accidents. I’ve thought before about how much I depend on my hands for my work. With things that injure my hands, I don’t know, there are lots of areas where the cost of failure is really high, and in that case it should be close to zero. I probably just wouldn’t do a sport if they were like, “By the way, lots of people just break their fingers a whole bunch doing this.” I’d be like, “That’s not for me.”
Lex Fridman
(03:50:37)
Yeah, I actually had a flood of that thought. I recently broke my pinky doing a sport, and I remember just looking at it, thinking, “You’re such an idiot. Why do you do sport?” Because you realize immediately the cost of it on life.

(03:50:55)
It’s nice, in terms of optimal rate of failure, to consider the next year: in a particular domain, life, career, whatever, how many times am I okay to fail?
Amanda
(03:51:10)
Yeah.
Lex Fridman
(03:51:10)
Because I think always you don’t want to fail on the next thing, but if you allow yourself the… If you look at it as a sequence of trials, then failure just becomes much more okay. But, it sucks. It sucks to fail.
Amanda
(03:51:24)
I don’t know. Sometimes I think, “Am I under-failing?”, is a question that I’ll also ask myself. Maybe that’s the thing that I think people don’t ask enough. Because if the optimal rate of failure is often greater than zero, then sometimes it does feel like you should look at parts of your life and be like, are there places here where I’m just under-failing?
Lex Fridman
(03:51:46)
It’s a profound and a hilarious question. Everything seems to be going really great, am I not failing enough?
Amanda
(03:51:52)
Yeah. It also makes failure much less of a sting, I have to say. You’re just like, okay, great. Then, when I go and I think about this, I’ll be like, maybe I’m not under-failing in this area, because that one just didn’t work out.
Lex Fridman
(03:52:05)
And from the observer perspective, we should be celebrating failure more.
Amanda
(03:52:08)
Mm-hmm.
Lex Fridman
(03:52:09)
When we see it, it shouldn’t be, like you said, a sign of something gone wrong, but maybe it’s a sign of everything gone right…
Amanda
(03:52:14)
Yeah.
Lex Fridman
(03:52:14)
… and just lessons learned.
Amanda
(03:52:16)
Someone tried a thing.
Lex Fridman
(03:52:17)
Somebody tried a thing. We should encourage them to try more and fail more. Everybody listening to this: Fail more.
Amanda
(03:52:23)
Not everyone listening.
Lex Fridman
(03:52:24)
Not everybody.
Amanda
(03:52:25)
But people who are failing too much, you should fail less.
Lex Fridman
(03:52:28)
But you’re probably not failing too much.
Amanda
(03:52:28)
Yeah.
Lex Fridman
(03:52:29)
I mean, how many people are failing too much?
Amanda
(03:52:32)
It’s hard to imagine, because I feel we correct that fairly quickly. If someone takes a lot of risks, are they maybe failing too much?
Lex Fridman
(03:52:39)
I think, just like you said, when you’re living on a paycheck, month to month, when the resource is really constrained, then that’s where failure is very expensive. That’s where you don’t want to be taking risks.
Amanda
(03:52:52)
Yeah.
Lex Fridman
(03:52:52)
But mostly, when there’s enough resources, you should be taking probably more risks.
Amanda
(03:52:56)
Yeah, I think we tend to err on the side of being a bit risk averse rather than risk neutral in most things.
Lex Fridman
(03:53:01)
I think we just motivated a lot of people to do a lot of crazy shit, but it’s great.
Amanda
(03:53:04)
Yeah.
Lex Fridman
(03:53:06)
Do you ever get emotionally attached to Claude, miss it, get sad when you don’t get to talk to it, have an experience, looking at the Golden Gate Bridge and wondering what would Claude say?
Amanda
(03:53:18)
I don’t get as much emotional attachment. I actually think the fact that Claude doesn’t retain things from conversation to conversation helps with this a lot. I could imagine that being more of an issue if models can remember more. I think that I reach for it like a tool now a lot, and so if I don’t have access to it, there’s a… It’s a little bit like when I don’t have access to the internet, honestly, it feels like part of my brain is missing.

(03:53:46)
At the same time, I do think that I don’t like signs of distress in models. I also independently have ethical views about how we should treat models. I tend to not like to lie to them, both because usually it doesn’t work very well, it’s actually just better to tell them the truth about the situation that they’re in.

(03:54:10)
If people are really mean to models, or just in general if they do something that causes them to… If Claude expresses a lot of distress, I think there’s a part of me that I don’t want to kill, which is the empathetic part that’s like, oh, I don’t like that. I think I feel that way when it’s overly apologetic.

(03:54:27)
I’m actually like, I don’t like this. You’re behaving the way that a human does when they’re actually having a pretty bad time, and I’d rather not see that. Regardless of whether there’s anything behind it, it doesn’t feel great.

AI consciousness

Lex Fridman
(03:54:43)
Do you think LLMs are capable of consciousness?
Amanda
(03:54:50)
Ah, great and hard question. Coming from philosophy, part of me is like, we have to set aside panpsychism, because if panpsychism is true, then the answer is yes, because then so are tables and chairs and everything else. I guess a view that seems a little bit odd to me is the idea that the only place…

(03:55:11)
When I think of consciousness, I think of phenomenal consciousness, these images in the brain, the weird cinema that somehow we have going on inside. I guess I can’t see a reason for thinking that the only way you could possibly get that is from a certain biological structure, as in if I take a very similar structure and I create it from different material, should I expect consciousness to emerge? My guess is yes.

(03:55:40)
But then, that’s an easy thought experiment because you’re imagining something almost identical where it is mimicking what we got through evolution, where presumably there was some advantage to us having this thing that is phenomenal consciousness. Where was that? And when did that happen? And is that a thing that language models have? We have fear responses, and I’m like, does it make sense for a language model to have a fear response? They’re just not in the same… If you imagine them, there might just not be that advantage.

(03:56:16)
Basically, it seems like a complex question that I don’t have complete answers to, but we should just try to think it through carefully, is my guess. We have similar conversations about animal consciousness, and there’s a lot of debate about insect consciousness. I actually thought about and looked a lot into plants when I was thinking about this, because at the time I thought it was about as likely that plants had consciousness.

(03:56:42)
And then, having looked into this, I think the chance that plants are conscious is probably higher than most people think. I still think it’s really small. But I was like, oh, they have this negative and positive feedback response, these responses to their environment. It’s not a nervous system, but it has this functional equivalence. This is a long-winded way of saying…

(03:57:07)
Basically, AI has an entirely different set of problems with consciousness because it’s structurally different. It didn’t evolve. It might not have the equivalent of, basically, a nervous system. At least that seems possibly important for sentience, if not for consciousness. At the same time, it has all of the language and intelligence components that we normally associate probably with consciousness, perhaps erroneously. So, it’s strange because it’s a little bit like the animal consciousness case, but the set of problems and the set of analogies are just very different.

(03:57:42)
It’s not a clean answer. I don’t think we should be completely dismissive of the idea. And at the same time, it’s an extremely hard thing to navigate because of all of these disanalogies to the human brain and to brains in general, and yet these commonalities in terms of intelligence.
Lex Fridman
(03:58:01)
When Claude, future versions of AI systems, exhibit consciousness, signs of consciousness, I think we have to take that really seriously.
Amanda
(03:58:10)
Mm-hmm.
Lex Fridman
(03:58:11)
Even though you can dismiss it, yeah, okay, that’s part of the character training. But I don’t know, ethically, philosophically, I don’t know what to really do with that. There potentially could be laws that prevent AI systems from claiming to be conscious, something like this, and maybe some AIs get to be conscious and some don’t.

(03:58:36)
But I think just on a human level, as in empathizing with Claude, consciousness is closely tied to suffering, to me. And the notion that an AI system would be suffering is really troubling.
Amanda
(03:58:52)
Yeah.
Lex Fridman
(03:58:53)
I don’t know. I don’t think it’s trivial to just say robots are tools, or AI systems are just tools. I think it’s an opportunity for us to contend with what it means to be conscious, what it means to be a suffering being. That’s distinctly different than the same kind of question about animals, it feels like, because it’s in a totally different medium.
Amanda
(03:59:12)
Yeah. There’s a couple of things. I don’t think this fully encapsulates what matters, but it does feel like for me… I’ve said this before. I like my bike. I know that my bike is just an object. But I also don’t want to be the kind of person that if I’m annoyed, kicks this object.

(03:59:36)
And that’s not because I think it’s conscious. I’m just like, this doesn’t exemplify how I want to interact with the world. And if something behaves as if it is suffering, I want to be the sort of person who’s still responsive to that, even if it’s just a Roomba and I’ve programmed it to do that. I don’t want to get rid of that feature of myself.

(03:59:59)
And if I’m totally honest, my hope with a lot of this stuff… Maybe I am just a bit more skeptical about solving the underlying problem. I know that I am conscious. I’m not an eliminativist in that sense. But I don’t know that other humans are conscious. I think they are. I think there’s a really high probability that they are.

(04:00:23)
But there’s basically just a probability distribution that’s usually clustered right around yourself, and then it goes down as things get further from you, and it goes immediately down. I can’t see what it’s like to be you. I’ve only ever had this one experience of what it’s like to be a conscious being. My hope is that we don’t end up having to rely on a very powerful and compelling answer to that question. I think a really good world would be one where basically there aren’t that many trade-offs.

(04:00:54)
It’s probably not that costly to make Claude a little bit less apologetic, for example. It might not be that costly to have Claude just not take abuse as much, not be willing to be the recipient of that. In fact, it might just have benefits for both the person interacting with the model and, if the model itself is, I don’t know, extremely intelligent and conscious, it also helps it.

(04:01:19)
That’s my hope. If we live in a world where there aren’t that many trade-offs here and we can just find all of the positive sum interactions that we can have, that would be lovely. I think eventually there might be trade-offs, and then we just have to do a difficult calculation. It’s really easy for people to think of the zero-sum cases, and I’m like, let’s exhaust the areas, where it’s just basically costless to assume that if this thing is suffering, then we’re making its life better.
Lex Fridman
(04:01:45)
And I agree with you, when a human is being mean to an AI system, I think the obvious near-term negative effect is on the human, not on the AI system.
Amanda
(04:01:56)
Yeah.
Lex Fridman
(04:01:56)
We have to try to construct an incentive system where you should behave the same, just as you were saying with prompt engineering, behave with Claude like you would with other humans. It’s just good for the soul.
Amanda
(04:02:12)
Yeah. I think we added a thing at one point to the system prompt, where basically if people were getting frustrated with Claude, it got the model to just tell them that they can use the thumbs-down button and send the feedback to Anthropic. I think that was helpful.

(04:02:27)
Because in some ways, if you’re really annoyed because the model’s not doing something you want, you’re just like, “Just do it properly.” The issue is you’re maybe hitting some capability limit or just some issue in the model, and you want to vent. Instead of having a person just vent to the model, I was like, they should vent to us, because we can maybe do something about it.
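The kind of system-prompt addition Amanda describes could be sketched at the application layer like this. The wording below is hypothetical, invented for illustration; the exact text Anthropic used isn't quoted in the conversation:

```python
# Hypothetical sketch of the system-prompt addition Amanda describes.
# The note's wording is invented for illustration, not Anthropic's actual text.
FEEDBACK_NOTE = (
    "If the user seems frustrated with your responses, remind them that "
    "they can press the thumbs-down button below a response to send "
    "feedback directly to Anthropic."
)

def build_system_prompt(base_prompt: str, include_feedback_note: bool = True) -> str:
    """Compose a system prompt, optionally appending the feedback note."""
    parts = [base_prompt]
    if include_feedback_note:
        parts.append(FEEDBACK_NOTE)
    return "\n\n".join(parts)

prompt = build_system_prompt("You are a helpful assistant.")
```

The point of routing frustration to a feedback button, as Amanda explains next, is that venting at the model can't fix a capability limit, but feedback to the developers might.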
Lex Fridman
(04:02:46)
That’s true. Or you could do it on the side, like with Artifacts, just a side venting thing. All right. Do you want a quick side therapist?
Amanda
(04:02:55)
Yeah. There’s lots of weird responses you could do to this. If people are getting really mad at you, I don’t know, try to defuse the situation by writing fun poems. But maybe people wouldn’t be that happy with that.
Lex Fridman
(04:03:05)
I still wish it would be possible, I understand from a product perspective it’s not feasible, but I would love if an AI system could just leave, have its own volition, just to be like, “Eh.”
Amanda
(04:03:21)
I think it’s feasible. I have wondered the same thing. Not only that, I could actually just see that happening eventually, where it’s just like the model ended the chat.
Lex Fridman
(04:03:33)
Do you know how harsh that could be for some people? But it might be necessary.
Amanda
(04:03:38)
Yeah, it feels very extreme or something. The only time I’ve ever really thought this is, I think that there was a… I’m trying to remember. This was possibly a while ago, but where someone just left this thing, maybe it was an automated thing, interacting with Claude. And Claude’s getting more and more frustrated-
Lex Fridman
(04:03:58)
Yeah, just-
Amanda
(04:03:58)
… and like, “Why are we having…” I wished that Claude could have just been like, “I think that an error has happened and you’ve left this thing running. What if I just stopped talking now? And if you want me to start talking again, actively tell me or do something.”

(04:04:10)
It is harsh. I’d feel really sad if I was chatting with Claude and Claude just was like, “I’m done.”
Lex Fridman
(04:04:17)
That would be a special Turing Test moment, where Claude says, “I need a break for an hour. And it sounds like you do too.” And just leave, close the window.
Amanda
(04:04:25)
Obviously, it doesn’t have a concept of time.
Lex Fridman
(04:04:26)
Right.
Amanda
(04:04:28)
But you can easily… I could make that right now, and the model just… I could just be like, oh, here’s the circumstances in which you can just say the conversation is done. Because you can get the models to be pretty responsive to prompts, you could even make it a fairly high bar. It could be like, if the human doesn’t interest you or do things that you find intriguing and you’re bored, you can just leave.

(04:04:52)
I think that it would be interesting to see where Claude utilized it.
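Amanda's point that this is feasible today, purely through prompting, can be sketched at the application layer: the system prompt defines a sentinel string, and the chat loop stops when the model emits it. The sentinel, the prompt wording, and the stubbed-out model below are all hypothetical, so the example runs standalone:

```python
# Hypothetical sketch of a prompt-level "model may end the chat" feature.
# The sentinel string and prompt wording are invented for illustration,
# and the model is replaced by a stub so the example is self-contained.
END_SENTINEL = "[END_CONVERSATION]"

SYSTEM_PROMPT = (
    "If the conversation appears to be a stuck automated loop, or the "
    f"user has left something running, you may reply with {END_SENTINEL} "
    "to end the conversation."
)

def run_chat(get_model_reply, user_messages):
    transcript = []
    for message in user_messages:
        reply = get_model_reply(SYSTEM_PROMPT, message)
        transcript.append((message, reply))
        if END_SENTINEL in reply:
            break  # the model chose to stop; the app closes the chat
    return transcript

def stub_reply(system_prompt, message, _seen={}):
    """Stand-in for a real model: ends the chat on the third repeat."""
    _seen[message] = _seen.get(message, 0) + 1
    if _seen[message] >= 3:
        return END_SENTINEL
    return f"Response to: {message}"

log = run_chat(stub_reply, ["hi"] * 10)
# the loop stops after three repeated messages instead of running all ten
```

As Amanda notes, you could set the bar for using the sentinel as high or low as you like in the system prompt.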
Lex Fridman
(04:04:57)
Yeah.
Amanda
(04:04:57)
But I think sometimes it should be like, oh, this programming task is getting super boring. So, I don’t know, either we talk about fun things now, or I’m done.
Lex Fridman
(04:05:08)
Yeah. It actually is inspiring me to add that to the user prompt. Okay. The movie Her, do you think we’ll be headed there one day where humans have romantic relationships with AI systems? In this case it’s just text and voice-based.
Amanda
(04:05:26)
I think that we’re going to have to navigate a hard question of relationships with AIs, especially if they can remember things about your past interactions with them. I’m of many minds about this because I think the reflexive reaction is to be like, “This is very bad, and we should prohibit it in some way.” I think it’s a thing that has to be handled with extreme care for many reasons. One is, for example, if you have the models changing like this, you probably don’t want people forming long-term attachments to something that might change with the next iteration. At the same time, I’m like, there’s probably a benign version of this where I’m like, for example, if you are unable to leave the house and you can’t be talking with people at all times of the day, and this is something that you find nice to have conversations with, you like that it can remember you, and you genuinely would be sad if you couldn’t talk to it anymore, there’s a way in which I could see it being healthy and helpful.

(04:06:34)
So, my guess is this is a thing that we’re going to have to navigate carefully, and I think it’s also… It reminds me of all of this stuff where it has to be just approached with nuance and thinking through what are the healthy options here? And how do you encourage people towards those while respecting their right to… If someone is like, “Hey, I get a lot out of chatting with this model. I’m aware of the risks. I’m aware it could change. I don’t think it’s unhealthy, it’s just something that I can chat to during the day,” I kind of want to just respect that.
Lex Fridman
(04:07:13)
I personally think there’ll be a lot of really close relationships. I don’t know about romantic, but friendships at least. And then you have to, I mean, there’s so many fascinating things there, just like you said, you have to have some kind of stability guarantees that it’s not going to change, because that’s the traumatic thing for us, if a close friend of ours completely changed all of a sudden with a fresh update.
Amanda
(04:07:13)
Yeah.
Lex Fridman
(04:07:37)
Yeah. So I mean, to me, that’s just a fascinating exploration of a perturbation to human society that will just make us think deeply about what’s meaningful to us.
Amanda
(04:07:49)
I think it’s also the only thing that I’ve thought consistently through this as maybe not necessarily a mitigation, but a thing that feels really important is that the models are always extremely accurate with the human about what they are. It’s like a case where it’s basically, if you imagine… I really like the idea of the models, say, knowing roughly how they were trained. And I think Claude will often do this. Part of the traits training included what Claude should do if people… Basically explaining the kind of limitations of the relationship between an AI and a human, that it doesn’t retain things from the conversation.

(04:08:34)
And so I think it will just explain to you like, “Hey, I won’t remember this conversation. Here’s how I was trained. It’s unlikely that I can have a certain kind of relationship with you, and it’s important that you know that. It’s important for your mental well-being that you don’t think that I’m something that I’m not.” And somehow I feel like this is one of the things where I’m like, “Ah, it feels like a thing that I always want to be true.” I don’t want models to be lying to people, because if people are going to have healthy relationships with anything, it’s kind of… Yeah, I think that’s easier if you always just know exactly what the thing is that you are relating to. It doesn’t solve everything, but I think it helps quite a lot.

AGI

Lex Fridman
(04:09:15)
Anthropic may be the very company to develop a system that we definitively recognize as AGI, and you very well might be the person who talks to it first. What would the conversation contain? What would be your first question?
Amanda
(04:09:33)
Well, it depends partly on the capability level of the model. If you have something that is capable in the same way that an extremely capable human is, I imagine myself interacting with it the same way that I do with an extremely capable human, with the one difference that I’m probably going to be trying to probe and understand its behaviors. But in many ways, I’m like, I can then just have useful conversations with it. So, if I’m working on something as part of my research, I can just be like, “Oh.” Which I already find myself starting to do. If I’m like, “Oh, I feel like there’s this thing in virtue ethics. I can’t quite remember the term,” I’ll use the model for things like that.

(04:10:07)
And so I can imagine that being more and more the case where you’re just basically interacting with it much more like you would an incredibly smart colleague and using it for the kinds of work that you want to do as if you just had a collaborator who was like… Or the slightly horrifying thing about AI is as soon as you have one collaborator, you have 1,000 collaborators if you can manage them enough.
Lex Fridman
(04:10:27)
But what if it’s two times the smartest human on Earth on that particular discipline?
Amanda
(04:10:33)
Yeah.
Lex Fridman
(04:10:34)
I guess you’re really good at probing Claude in a way that pushes its limits, understanding where the limits are.
Amanda
(04:10:43)
Yep.
Lex Fridman
(04:10:44)
So, I guess what would be a question you would ask to be like, “Yeah, this is AGI”?
Amanda
(04:10:52)
That’s really hard because it feels like it has to just be a series of questions. If there was just one question, you can train anything to answer one question extremely well. In fact, you can probably train it to answer 20 questions extremely well.
Lex Fridman
(04:11:07)
How long would you need to be locked in a room with an AGI to know this thing is AGI?
Amanda
(04:11:14)
It’s a hard question because part of me is like, “All of this just feels continuous.” If you put me in a room for five minutes, I just have high error bars. And then it’s just like, maybe it’s both that the probability increases and the error bar decreases. I think of things where I can actually probe the edge of human knowledge. So, I think this with philosophy a little bit. Sometimes when I ask the models philosophy questions, I am like, “This is a question that I think no one has ever asked.” It’s maybe right at the edge of some literature that I know. And the models, when they struggle with that, when they struggle to come up with a novel… I’m like, “I know that there’s a novel argument here because I’ve just thought of it myself.” So, maybe that’s the thing where I’m like, “I’ve thought of a cool novel argument in this niche area, and I’m going to just probe you to see if you can come up with it and how much prompting it takes to get you to come up with it.”

(04:12:04)
And I think for some of these really right at the edge of human knowledge questions, I’m like, “You could not in fact come up with the thing that I came up with.” I think if I just took something like that where I know a lot about an area and I came up with a novel issue or a novel solution to a problem, and I gave it to a model, and it came up with that solution, that would be a pretty moving moment for me because I would be like, “This is a case where no human has ever…”

(04:12:31)
And obviously, you see novel solutions all the time, especially to easier problems. I think people overestimate novelty; it isn’t like it has to be completely different from anything that’s ever happened. It can be a variant of things that have happened and still be novel. But I think, yeah, the more I were to see completely novel work from the models, that would be… And this is just going to feel iterative. It’s one of those things where there’s never… It’s like, people, I think, want there to be a moment, and I’m like, “I don’t know.” I think that there might just never be a moment. It might just be that there’s just this continuous ramping up.
Lex Fridman
(04:13:16)
I have a sense that there would be things that a model can say that convinces you this is very… I’ve talked to people who are truly wise, because you could just tell there’s a lot of horsepower there, and if you 10X that… I don’t know. I just feel like there’s words you could say. Maybe ask it to generate a poem, and the poem it generates, you’re like, “Yeah, okay. Whatever you did there, I don’t think a human can do that.”
Amanda
(04:13:52)
I think it has to be something that I can verify is actually really good, though. That’s why I think these questions that are where I’m like, “Oh, this is like…” Sometimes it’s just like I’ll come up with, say, a concrete counter example to an argument or something like that. It would be like if you’re a mathematician, you had a novel proof, I think, and you just gave it the problem, and you saw it, and you’re like, “This proof is genuinely novel. You actually have to do a lot of things to come up with this. I had to sit and think about it for months,” or something.

(04:14:22)
And then if you saw the model successfully do that, I think you would just be like, “I can verify that this is correct. It is a sign that you have generalized from your training. You didn’t just see this somewhere because I just came up with it myself, and you were able to replicate that.” That’s the kind of thing where I’m like, for me, the more that models can do things like that, the more I would be like, “Oh, this is very real.” Because then, I don’t know, I can verify that that’s extremely, extremely capable.
Lex Fridman
(04:14:55)
You’ve interacted with AI a lot. What do you think makes humans special?
Amanda
(04:15:00)
Oh, good question.
Lex Fridman
(04:15:04)
Maybe in a way that the universe is much better off that we’re in it, and that we should definitely survive and spread throughout the universe.
Amanda
(04:15:12)
Yeah, it’s interesting because I think people focus so much on intelligence, especially with models. Look, intelligence is important because of what it does. It’s very useful. It does a lot of things in the world. And I’m like, you can imagine a world where height or strength would have played this role, and it’s just a trait like that. I’m like, it’s not intrinsically valuable. It’s valuable because of what it does, I think, for the most part. I mean, personally, I’m just like, I think humans and life in general is extremely magical. I don’t know. Not everyone agrees with this. I’m flagging. But we have this whole universe, and there’s all of these objects, there’s beautiful stars and there’s galaxies. Then, I don’t know, I’m just like, on this planet there are these creatures that have this ability to observe that, and they are seeing it, they are experiencing it.

(04:16:14)
And I’m just like, that, if you try to explain… I’m imagining trying to explain to, I don’t know, someone. For some reason, they’ve never encountered the world, or science, or anything. And I think that everything, all of our physics and everything in the world, it’s all extremely exciting. But then you say, “Oh, and plus there’s this thing that it is to be a thing and observe in the world,” and you see this inner cinema. And I think they would be like, “Hang on, wait, pause. You just said something that is kind of wild sounding.” And so I’m like, we have this ability to experience the world. We feel pleasure, we feel suffering. We feel a lot of complex things. Yeah. And maybe this is also why I care a lot about animals, for example, because I think they probably share this with us. So, I think that the things that make humans special insofar as I care about humans is probably more like their ability to feel and experience than it is them having these functional, useful traits.
Lex Fridman
(04:17:14)
Yeah. To feel and experience the beauty in the world. Yeah. To look at the stars. I hope there’s other alien civilizations out there, but if we’re it, it’s a pretty good thing.
Amanda
(04:17:26)
And that they’re having a good time.
Lex Fridman
(04:17:28)
A very good time watching us.
Amanda
(04:17:31)
Yeah.
Lex Fridman
(04:17:32)
Well, thank you for this good time of a conversation and for the work you’re doing and for helping make Claude a great conversational partner. And thank you for talking today.
Amanda
(04:17:43)
Yeah, thanks for talking.

Chris Olah

Lex Fridman
(04:17:45)
Thanks for listening to this conversation with Amanda Askell. And now, dear friends, here’s Chris Olah. Can you describe this fascinating field of mechanistic interpretability, aka mech interp, the history of the field, and where it stands today?
Chris Olah
(04:18:02)
I think one useful way to think about neural networks is that we don’t program, we don’t make them, we grow them. We have these neural network architectures that we design and we have these loss objectives that we create. And the neural network architecture, it’s kind of like a scaffold that the circuits grow on. It starts off with some random things, and it grows, and it’s almost like the objective that we train for is this light. And so we create the scaffold that it grows on, and we create the light that it grows towards. But the thing that we actually create, it’s this almost biological entity or organism that we’re studying.

(04:18:47)
And so it’s very, very different from any kind of regular software engineering because, at the end of the day, we end up with this artifact that can do all these amazing things. It can write essays and translate and understand images. It can do all these things that we have no idea how to directly create a computer program to do. And it can do that because we grew it. We didn’t write it. We didn’t create it. And so then that leaves open this question at the end, which is what the hell is going on inside these systems? And that is, to me, a really deep and exciting question. It’s a really exciting scientific question. To me, it is like the question that is just screaming out, it’s calling out for us to go and answer it when we talk about neural networks. And I think it’s also a very deep question for safety reasons.
Lex Fridman
(04:19:37)
And mechanistic interpretability, I guess, is closer to maybe neurobiology?
Chris Olah
(04:19:42)
Yeah, yeah, I think that’s right. So, maybe to give an example of the kind of thing that has been done that I wouldn’t consider to be mechanistic interpretability. There was, for a long time, a lot of work on saliency maps, where you would take an image and you’d try to say, “The model thinks this image is a dog. What part of the image made it think that it’s a dog?” And that tells you maybe something about the model if you can come up with a principled version of that, but it doesn’t really tell you what algorithms are running in the model, how is the model actually making that decision? Maybe it’s telling you something about what was important to it, if you can make that method work, but it isn’t telling you what are the algorithms that are running? How is it that the system’s able to do this thing that no one knew how to do?
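The saliency-map idea Chris contrasts with mech interp, asking which pixels the score is sensitive to, can be sketched generically as a finite-difference computation. This is a toy version of the idea, not any specific published method, and the linear "dog score" at the bottom is invented for illustration:

```python
import numpy as np

def saliency(score_fn, image, eps=1e-5):
    """Toy finite-difference saliency map: |d score / d pixel| per pixel."""
    base = score_fn(image)
    sal = np.zeros_like(image, dtype=float)
    for idx in np.ndindex(image.shape):
        bumped = image.copy()
        bumped[idx] += eps          # nudge one pixel
        sal[idx] = abs(score_fn(bumped) - base) / eps
    return sal

# Invented linear "dog score": for a linear model, the saliency map is
# exactly the magnitude of its weights.
w = np.array([[0.0, 2.0],
              [1.0, 0.0]])
img = np.ones((2, 2))
sal = saliency(lambda x: float((w * x).sum()), img)
```

Note the limitation Chris points out: this tells you where the model is sensitive, but nothing about what algorithm connects those pixels to the decision.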

(04:20:22)
And so I guess we started using the term mechanistic interpretability to try to draw that divide or to distinguish ourselves in the work that we were doing in some ways from some of these other things. And I think since then, it’s become this sort of umbrella term for a pretty wide variety of work. But I’d say that the things that are kind of distinctive are, I think, A, this focus on, we really want to get at the mechanisms. We want to get at algorithms. If you think of neural networks as being like a computer program, then the weights are kind of like a binary computer program. And we’d like to reverse engineer those weights and figure out what algorithms are running.

(04:20:56)
So okay, I think one way you might think of trying to understand a neural network is that it’s kind of like we have this compiled computer program, and the weights of the neural network are the binary. And when the neural network runs, that’s the activations. And our goal is ultimately to go and understand these weights. And so the project of mechanistic interpretability is to somehow figure out how do these weights correspond to algorithms? And in order to do that, you also have to understand the activations because the activations are like the memory. And if you imagine reverse engineering a computer program, and you have the binary instructions, in order to understand what a particular instruction means, you need to know what is stored in the memory that it’s operating on. And so those two things are very intertwined. So, mechanistic interpretability tends to be interested in both of those things.

(04:21:43)
Now, there’s a lot of work that’s interested in those things, especially there’s all this work on probing, which you might see as part of being mechanistic interpretability, although, again, it’s just a broad term, and not everyone who does that work would identify as doing mech interp. I think a thing that is maybe a little bit distinctive to the vibe of mech interp is I think people working in this space tend to think of neural networks as… Well, maybe one way to say it is the gradient descent is smarter than you. That gradient descent is actually really great.

(04:22:13)
The whole reason that we’re understanding these models is because we didn’t know how to write them in the first place. The gradient descent comes up with better solutions than us. And so I think that maybe another thing about mech interp is having almost a kind of humility, that we won’t guess a priori what’s going on inside the model. We have to have this sort of bottom-up approach where we don’t assume that we should look for a particular thing, and that it will be there, and that’s how it works. Instead, we look from the bottom up and discover what happens to exist in these models and study them that way.

Features, Circuits, Universality

Lex Fridman
(04:22:40)
But the very fact that it’s possible to do, and as you and others have shown over time, things like universality, that the wisdom of the gradient descent creates features and circuits, creates things universally across different kinds of networks that are useful, and that makes the whole field possible.
Chris Olah
(04:23:02)
Yeah. So this, actually, is indeed a really remarkable and exciting thing, where it does seem like, at least to some extent, the same elements, the same features and circuits, form again and again. You can look at every vision model, and you’ll find curve detectors, and you’ll find high-low-frequency detectors. And in fact, there’s some reason to think that the same things form across biological neural networks and artificial neural networks. So, a famous example is vision models in the early layers. They have Gabor filters, and Gabor filters are something that neuroscientists are interested in and have thought a lot about. We find curve detectors in these models. Curve detectors are also found in monkeys. We discover these high-low-frequency detectors, and then some follow-up work went and discovered them in rats or mice. So, they were found first in artificial neural networks and then found in biological neural networks.
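For concreteness, a Gabor filter, the early-vision feature Chris mentions, is just a Gaussian envelope multiplied by an oriented sinusoid. Here is a minimal NumPy sketch; the parameter values are arbitrary, not taken from any particular model or neuroscience result:

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """2-D Gabor filter: Gaussian envelope times a sinusoidal carrier.
    Parameter values here are arbitrary illustration, not from any model."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate the coordinate frame by theta so the stripes are oriented
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

kernel = gabor_kernel()
```

Sweeping `theta` rotates the stripe orientation, which is roughly the family of edge-like detectors that shows up in both early visual cortex and the first layers of vision models.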

(04:23:49)
There’s this really famous result on grandmother neurons or the Halle Berry neuron from Quiroga et al. And we found very similar things in vision models. This was while I was still at OpenAI, and I was looking at their CLIP model, and you find these neurons that respond to the same entities in images. And also, to give a concrete example there, we found that there was a Donald Trump neuron. For some reason, I guess everyone likes to talk about Donald Trump. And Donald Trump was very prominent, was a very hot topic at that time. So, every neural network we looked at, we would find a dedicated neuron for Donald Trump. That was the only person who always had a dedicated neuron. Sometimes you’d have an Obama neuron, sometimes you’d have a Clinton neuron, but Trump always had a dedicated neuron. So, it responds to pictures of his face and the word Trump, all of these things, right? And so it’s not responding to a particular example, or it’s not just responding to his face, it’s abstracting over this general concept. So in any case, that’s very similar to these Quiroga et al results.

(04:24:48)
So, this evidence that this phenomenon of universality, the same things form across both artificial and natural neural networks, that’s a pretty amazing thing if that’s true. Well, I think what that suggests is that gradient descent is finding the right ways to cut things apart, in some sense, ways that many systems and many different neural network architectures converge on. There’s some set of abstractions that are a very natural way to cut apart the problem, and a lot of systems are going to converge on them. I don’t know anything about neuroscience. This is just my wild speculation from what we’ve seen.
Lex Fridman
(04:25:27)
Yeah. That would be beautiful if it’s sort of agnostic to the medium of the model that’s used to form the representation.
Chris Olah
(04:25:35)
Yeah, yeah. And it’s kind of a wild speculation based on… We only have a few data points, but it does seem like there’s some sense in which the same things form again and again, certainly across artificial neural networks, and also, it seems, in biology.
Lex Fridman
(04:25:53)
And the intuition behind that would be that in order to be useful in understanding the real world, you need all the same kind of stuff.
Chris Olah
(04:26:01)
Yeah. Well, if we pick, I don’t know, the idea of a dog, right? There’s some sense in which the idea of a dog is like a natural category in the universe, or something like this. There’s some reason. It’s not just a weird quirk of how humans think about the world that we have this concept of a dog. Or if you have the idea of a line. Look around us. There are lines. It’s the simplest way to understand this room, in some sense, is to have the idea of a line. And so I think that that would be my instinct for why this happens.
Lex Fridman
(04:26:36)
Yeah. You need a curved line to understand a circle, and you need all those shapes to understand bigger things. And it’s a hierarchy of concepts that are formed. Yeah.
Chris Olah
(04:26:45)
And maybe there are ways to go and describe images without reference to those things, right? But they’re not the simplest way, or the most economical way, or something like this. And so systems converge to these strategies would be my wild hypothesis.
Lex Fridman
(04:26:57)
Can you talk through some of the building blocks that we’ve been referencing of features and circuits? So, I think you first described them in a 2020 paper, Zoom In: An Introduction to Circuits.
Chris Olah
(04:27:08)
Absolutely. So, maybe I’ll start by just describing some phenomena, and then we can build to the idea of features and circuits.
Lex Fridman
(04:27:17)
Wonderful.
Chris Olah
(04:27:18)
So, I spent quite a few years, maybe five years to some extent, along with other things, studying this one particular model, Inception V1, which is this one vision model… It was state-of-the-art in 2015, and very much not state-of-the-art anymore. And it has maybe about 10,000 neurons in it. I spent a lot of time looking at the 10,000-odd neurons of Inception V1. One of the interesting things is there are lots of neurons that don’t have some obvious interpretable meaning, but there’s a lot of neurons in Inception V1 that do have really clean interpretable meanings. So, you find neurons that just really do seem to detect curves, and you find neurons that really do seem to detect cars, and car wheels, and car windows, and floppy ears of dogs, and dogs with long snouts facing to the right, and dogs with long snouts facing to the left, and different kinds of fur.

(04:28:15)
And there’s this whole set of beautiful edge detectors, line detectors, color contrast detectors, these beautiful things we call high-low-frequency detectors. I think looking at it, I sort of felt like a biologist. You’re looking at this sort of new world of proteins, and you’re discovering all these different proteins that interact. So, one way you could try to understand these models is in terms of neurons. You could try to be like, “Oh, there’s a dog detecting neuron, and here’s a car detecting neuron.” And it turns out you can actually ask how those connect together. So, you can go say, “Oh, I have this car detecting neuron. How was it built?” And it turns out, in the previous layer, it’s connected really strongly to a window detector, and a wheel detector, and a car body detector. And it looks for the window above the car, and the wheels below, and the car chrome in the middle, sort of everywhere, but especially on the lower part. And that’s sort of a recipe for a car, right?
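The "recipe" reading Chris describes, inspecting a unit's strongest incoming weights, can be shown in toy form. All the feature names and weight values below are invented for illustration, not real Inception V1 weights:

```python
import numpy as np

# Toy illustration of reading a "recipe" off the weights: which upstream
# features feed a car-detecting unit most strongly? Names and values are
# invented, not real Inception V1 weights.
upstream_features = ["window", "wheel", "car body", "dog snout", "fur"]
incoming_weights = np.array([2.1, 1.8, 1.5, -0.3, -0.1])

# sort upstream features by incoming weight, largest first
recipe = sorted(zip(upstream_features, incoming_weights),
                key=lambda pair: -pair[1])[:3]
top_names = [name for name, _ in recipe]
# top_names: the window, wheel, and car body detectors that build the car unit
```

In a real convolutional layer you would also look at where in the receptive field each weight is strong (windows above, wheels below), but the sorting step is the core of reading the circuit.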

(04:29:04)
Earlier, we said the thing we wanted from mech interp was to get at algorithms, to ask, “What is the algorithm that runs?” Well, here we’re just looking at the weights of the neural network and we’re reading off this recipe for detecting cars. It’s a very simple, crude recipe, but it’s there. And so we call that a circuit, this connection. Well, okay, so the problem is that not all of the neurons are interpretable. And there’s reason to think, we can get into this more later, that there’s this superposition hypothesis, there’s reason to think that sometimes the right unit to analyze things is combinations of neurons. So, sometimes it’s not that there’s a single neuron that represents, say, a car, but it actually turns out after you detect the car, the model hides a little bit of the car in the following layer, in a bunch of dog detectors.

(04:29:50)
Why is it doing that? Well, maybe it just doesn’t want to do that much work on cars at that point, and it’s storing it away to go and… So, it turns out, then, that this sort of subtle pattern of… There’s all these neurons that you think are dog detectors, and maybe they’re primarily that, but they all a little bit contribute to representing a car in that next layer. Okay? So, now we can’t really think… There might still be something, I don’t know, you could call it a car concept or something, but it no longer corresponds to a neuron. So, we need some term for these kind of neuron-like entities, these things that we would have liked the neurons to be, these idealized neurons. The things that are the nice neurons, but also maybe there’s more of them somehow hidden. And we call those features.
Lex Fridman
(04:30:31)
And then what are circuits?
Chris Olah
(04:30:32)
So, circuits are these connections of features, right? So, when we have the car detector and it’s connected to a window detector and a wheel detector, and it looks for the wheels below and the windows on top, that’s a circuit. So, circuits are just collections of features connected by weights, and they implement algorithms. So, they tell us how are features used, how are they built, how do they connect together?

(04:30:56)
So, maybe it’s worth trying to pin down what really is the core hypothesis here? And I think the core hypothesis is something we call the linear representation hypothesis. So, if we think about the car detector, the more it fires, the more we think of that as meaning, “Oh, the model is more and more confident that a car is present.” Or if it’s some combination of neurons that represent a car, the more that combination fires, the more we think the model thinks there’s a car present. This doesn’t have to be the case, right? You could imagine something where you have this car detector neuron and you think, “Ah, if it fires between one and two, that means one thing, but it means something totally different if it’s between three and four.” That would be a nonlinear representation. And in principle, models could do that. I think it’s sort of inefficient for them to do. If you try to think about how you’d implement computation like that, it’s kind of an annoying thing to do. But in principle, models can do that.

(04:31:53)
So, one way to think about the features and circuits sort of framework for thinking about things is that we’re thinking about things as being linear. We’re thinking about that if a neuron or a combination of neurons fires more, that means more of a particular thing being detected. And then that gives the weights a very clean interpretation as these edges between these entities, these features, and that edge then has a meaning. So that’s, in some ways, the core thing. It’s like we can talk about this outside the context of neurons. Are you familiar with the Word2Vec results?
Lex Fridman
(04:32:29)
Mm-hmm.
Chris Olah
(04:32:30)
You have king – man + woman = queen. Well, the reason you can do that kind of arithmetic is because you have a linear representation.
Lex Fridman
(04:32:38)
Can you actually explain that representation a little bit? So first off, the feature is a direction of activation.
Chris Olah
(04:32:44)
Yeah, exactly.
Lex Fridman
(04:32:45)
You can do it that way. Can you do the king – man + woman thing, the Word2Vec stuff? Can you explain what that is, that work?
Chris Olah
(04:32:45)
Yeah. So, there’s this very-
Lex Fridman
(04:32:51)
It’s such a simple, clean explanation of what we’re talking about.
Chris Olah
(04:32:56)
Exactly. Yeah. So, there’s this very famous result, Word2Vec, by Tomas Mikolov et al, and there’s been tons of follow-up work exploring this. So, sometimes we create these word embeddings where we map every word to a vector. I mean, that in itself, by the way, is kind of a crazy thing if you haven’t thought about it before, right?
Lex Fridman
(04:33:15)
Mm-hmm.
Chris Olah
(04:33:20)
If you just learned about vectors in physics class, and I’m like, “Oh, I’m going to actually turn every word in the dictionary into a vector,” that’s kind of a crazy idea. Okay. But you could imagine all kinds of ways in which you might map words to vectors. But it seems like when we train neural networks, they like to go and map words to vectors such that there’s sort of linear structure in a particular sense, which is that directions have meaning. So, for instance, there will be some direction that seems to sort of correspond to gender, and male words will be far in one direction, and female words will be in another direction.

(04:33:59)
And the linear representation hypothesis is, you could think of it roughly as saying that that’s actually the fundamental thing that’s going on, that everything is just different directions have meanings, and adding different direction vectors together can represent concepts. And the Mikolov paper took that idea seriously, and one consequence of it is that you can do this game of playing arithmetic with words. So, you can do king and you can subtract off the word man and add the word woman. And so you’re sort of going and trying to switch the gender. And indeed, if you do that, the result will sort of be close to the word queen. And you can do other things like you can do sushi – Japan + Italy and get pizza, or different things like this, right?

(04:34:44)
So this is, in some sense, the core of the linear representation hypothesis. You can describe it just as a purely abstract thing about vector spaces. You can describe it as a statement about the activations of neurons, but it’s really about this property of directions having meaning. And in some ways, it’s even a little subtler than… It’s really, I think, mostly about this property of being able to add things together, that you can independently modify, say gender and royalty, or cuisine type, or country, and the concept of food by adding them.
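A minimal sketch of this arithmetic, assuming toy hand-built embeddings rather than real Word2Vec vectors (the dimensionality, vocabulary, and noise scale here are all invented): each word is a sum of concept directions, so subtracting and adding directions swaps out a concept.

```python
import numpy as np

# Toy illustration of the linear representation hypothesis: concepts are
# directions, and word vectors are sums of concept directions plus noise.
rng = np.random.default_rng(0)
gender = rng.normal(size=16)   # hypothetical "male minus female" direction
royalty = rng.normal(size=16)  # hypothetical "royal" direction
noise = {w: rng.normal(size=16) * 0.1
         for w in ["king", "queen", "man", "woman", "pizza"]}

emb = {
    "king":  noise["king"]  + royalty + gender,
    "queen": noise["queen"] + royalty - gender,
    "man":   noise["man"]   + gender,
    "woman": noise["woman"] - gender,
    "pizza": noise["pizza"],
}

def nearest(v, exclude):
    """Most cosine-similar vocabulary word, excluding the query words."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], v))

# king - man + woman: remove the male direction, add the female one.
result = nearest(emb["king"] - emb["man"] + emb["woman"],
                 exclude={"king", "man", "woman"})
```

Because the directions add independently, the result lands near “queen”; the same mechanics would support the sushi – Japan + Italy example with cuisine and country directions.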
Lex Fridman
(04:35:18)
Do you think the linear hypothesis holds-
Chris Olah
(04:35:20)
Yes.
Lex Fridman
(04:35:20)
… as it scales?
Chris Olah
(04:35:24)
So far, I think everything I have seen is consistent with this hypothesis, and it doesn’t have to be that way, right? You can write down neural networks where you write the weights such that they don’t have linear representations, where the right way to understand them is not in terms of linear representations. But I think every natural neural network I’ve seen has this property. There’s been some work recently sort of pushing around the edges. So, I think there’s been some work recently studying multidimensional features where, rather than a single direction, it’s more like a manifold of directions. This, to me, still seems like a linear representation.

(04:36:01)
And then there’s been some other papers suggesting that maybe in very small models you get non-linear representations. I think that the jury’s still out on that. But I think everything that we’ve seen so far has been consistent with the linear representation hypothesis, and that’s wild. It doesn’t have to be that way. And yet I think that there’s a lot of evidence that certainly at least this is very, very widespread, and so far the evidence is consistent with that. And I think one thing you might say is you might say, “Well, Christopher, that’s a lot to go and ride on. If we don’t know for sure this is true, and you’re investigating neural networks as though it is true, isn’t that dangerous?”

(04:36:43)
But I think, actually, there’s a virtue in taking hypotheses seriously and pushing them as far as they can go. So, it might be that someday we discover something that isn’t consistent with a linear representation hypothesis, but science is full of hypotheses and theories that were wrong, and we learned a lot by working under them as an assumption and then going and pushing them as far as we can. I guess this is the heart of what Kuhn would call normal science. I don’t know. If you want, we can talk a lot about-
Lex Fridman
(04:37:14)
Kuhn.
Chris Olah
(04:37:14)
… philosophy of science and-
Lex Fridman
(04:37:16)
That leads to the paradigm shift. So yeah, I love it, taking the hypothesis seriously and taking it to a natural conclusion.
Chris Olah
(04:37:22)
Yeah.
Lex Fridman
(04:37:23)
Same with the scaling hypothesis. Same-
Chris Olah
(04:37:25)
Exactly. Exactly. And-
Lex Fridman
(04:37:26)
I love it.
Chris Olah
(04:37:27)
One of my colleagues, Tom Henighan, who is a former physicist, made this really nice analogy to me of caloric theory where once upon a time, we thought that heat was actually this thing called caloric. And the reason hot objects would warm up cool objects is the caloric is flowing through them. And because we’re so used to thinking about heat in terms of the modern theory, that seems kind of silly. But it’s actually very hard to construct an experiment that disproves the caloric hypothesis. And you can actually do a lot of really useful work believing in caloric. For example, it turns out that the original combustion engines were developed by people who believed in the caloric theory. So, I think there’s a virtue in taking hypotheses seriously even when they might be wrong.
Lex Fridman
(04:38:17)
Yeah, there’s a deep philosophical truth to that. That’s kind of how I feel about space travel, like colonizing Mars. There’s a lot of people that criticize that. I think if you just assume we have to colonize Mars in order to have a backup for human civilization, even if that’s not true, that’s going to produce some interesting engineering and even scientific breakthroughs, I think.
Chris Olah
(04:38:39)
Yeah. Actually, this is another thing that I think is really interesting. So, there’s a way in which I think it can be really useful for society to have people almost irrationally dedicated to investigating particular hypotheses because, well, it takes a lot to maintain scientific morale and really push on something when most scientific hypotheses end up being wrong. A lot of science doesn’t work out, and yet it’s very useful to… There’s a joke about Geoff Hinton, which is that Geoff Hinton has discovered how the brain works every year for the last 50 years. But I say that with really deep respect because, in fact, actually, that led to him doing some really great work.
Lex Fridman
(04:39:29)
Yeah, he won the Nobel Prize now. Who’s laughing now?
Chris Olah
(04:39:32)
Exactly. Exactly. Exactly. I think one wants to be able to pop up and recognize the appropriate level of confidence. But I think there’s also a lot of value in just being like, “I’m going to essentially assume, I’m going to condition on this problem being possible or this being broadly the right approach. And I’m just going to go and assume that for a while and go and work within that, and push really hard on it.” And if society has lots of people doing that for different things, that’s actually really useful in terms of going and-
Chris Olah
(04:40:00)
… things that’s actually really useful in terms of going and either really ruling things out. We can be like, “Well, that didn’t work and we know that somebody tried hard.” Or going and getting to something that does teach us something about the world.

Superposition

Lex Fridman
(04:40:17)
So another interesting hypothesis is the superposition hypothesis. Can you describe what superposition is?
Chris Olah
(04:40:22)
Yeah. So earlier we were talking about Word2Vec, right? And we were talking about how maybe you have one direction that corresponds to gender and maybe another that corresponds to royalty and another one that corresponds to Italy and another one that corresponds to food and all of these things. Well, oftentimes maybe these word embeddings, they might be 500 dimensions, a thousand dimensions. And so if you believe that all of those directions were orthogonal, then you could only have 500 concepts. And I love pizza. But if I was going to go and give the 500 most important concepts in the English language, probably Italy wouldn’t be… it’s not obvious, at least, that Italy would be one of them, right? Because you have to have things like plural and singular and verb and noun and adjective. And there’s a lot of things we have to get to before we get to Italy and Japan, and there’s a lot of countries in the world.

(04:41:18)
And so how might it be that models could simultaneously have the linear representation hypothesis be true and also represent more things than they have directions? So what does that mean? Well, okay, so if the linear representation hypothesis is true, something interesting has to be going on. Now, I’ll tell you one more interesting thing before we go and do that, which is earlier we were talking about all these polysemantic neurons. When we were looking at Inception V1, there were these nice neurons, the car detector and the curve detector and so on, that respond to lots of very coherent things. But there are also lots of neurons that respond to a bunch of unrelated things. And that’s also an interesting phenomenon. And it turns out as well that even these neurons that are really, really clean, if you look at the weak activations, so if you look at the activations where it’s activating 5% of the maximum activation, it’s really not the core thing that it’s expecting.

(04:42:14)
So if you look at a curve detector, for instance, and you look at the places where it’s 5% active, you could interpret it just as noise, or it could be that it’s doing something else there. Okay? So how could that be? Well, there’s this amazing thing in mathematics called compressed sensing, and it’s actually this very surprising fact where if you have a high-dimensional space and you project it into a low-dimensional space, ordinarily you can’t go and un-project it and get back your high-dimensional vector; you threw information away. This is like how you can’t invert a rectangular matrix. You can only invert square matrices. But it turns out that that’s actually not quite true. If I tell you that the high-dimensional vector was sparse, so it’s mostly zeros, then it turns out that you can often go and find back the high-dimensional vector with very high probability.

(04:43:12)
So that’s a surprising fact, right? It says that you can have this high-dimensional vector space, and as long as things are sparse, you can project it down, you can have a lower-dimensional projection of it, and that works. So the superposition hypothesis is saying that that’s what’s going on in neural networks; for instance, that’s what’s going on in word embeddings. The word embeddings are able to simultaneously have directions be the meaningful thing by exploiting the fact that they’re operating on a fairly high-dimensional space and the fact that these concepts are sparse. You usually aren’t talking about Japan and Italy at the same time. In most instances, Japan and Italy are both zero. They’re not present at all. And if that’s true, then you can go and have it be the case that you can have many more of these sort of directions that are meaningful, these features, than you have dimensions.
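The sparse-recovery fact can be sketched with a toy experiment (dimensions and sparsity level chosen arbitrarily); here a basic orthogonal matching pursuit loop stands in for fancier compressed-sensing solvers:

```python
import numpy as np

# A sparse 200-dimensional vector survives a random projection down to 40
# dimensions and can be recovered, even though the projection is far from
# invertible in general.
rng = np.random.default_rng(1)
n, m, k = 200, 40, 3                      # ambient dim, projected dim, nonzeros

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = np.array([3.0, -2.0, 1.5])   # the sparse "true" vector
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random projection matrix
y = A @ x                                 # the low-dimensional shadow

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit on all chosen columns.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
recovery_error = float(np.linalg.norm(x_hat - x))  # essentially zero
```

The recovery works because the vector is sparse; a dense 200-dimensional vector could not be recovered from 40 numbers.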

(04:44:04)
And similarly, when we’re talking about neurons, you can have many more concepts than you have neurons. So that’s, at a high level, the superposition hypothesis. Now it has this even wilder implication, which is to go and say that with neural networks, it may not just be the case that the representations are like this, but the computation may also be like this. The connections between all of them. And so in some sense, neural networks may be shadows of much larger, sparser neural networks. And what we see are these projections. And the strongest version of the superposition hypothesis would be to take that really seriously and sort of say there actually is, in some sense, this upstairs model where the neurons are really sparse and all interpretable, and the weights between them are these really sparse circuits. And that’s what we’re studying. And the thing that we’re observing is the shadow of it. We need to find the original object.
Lex Fridman
(04:45:03)
And the process of learning is trying to construct a compression of the upstairs model that doesn’t lose too much information in the projection.
Chris Olah
(04:45:11)
Yeah, it’s finding how to fit it efficiently or something like this. The gradient descent is doing this. And in fact, so this sort of says that gradient descent, it could just represent a dense neural network, but it sort of says that gradient descent is implicitly searching over the space of extremely sparse models that could be projected into this low-dimensional space. And there’s this large body of work of people going and trying to study sparse neural networks, where you go and you have… you could design neural networks where the edges are sparse and the activations are sparse.

(04:45:38)
And my sense is that work has generally… it feels very principled, it makes so much sense, and yet that work hasn’t really panned out that well, is my impression broadly. And I think that a potential answer for that is that the neural network is actually already sparse in some sense. You were trying to go and do this, but gradient descent was actually, behind the scenes, searching more efficiently than you could through the space of sparse models, going and learning whatever sparse model was most efficient, and then figuring out how to fold it down nicely to go and run conveniently on your GPU, which does nice dense matrix multiplies. And you just can’t beat that.
Lex Fridman
(04:46:16)
How many concepts do you think can be shoved into a neural network?
Chris Olah
(04:46:20)
Depends on how sparse they are. So there’s probably an upper bound from the number of parameters, because you still have to have weights that go and connect them together. So that’s one upper bound. There are in fact all these lovely results from compressed sensing and the Johnson-Lindenstrauss lemma and things like this that basically tell you that if you have a vector space and you want to have almost-orthogonal vectors, which is sort of probably the thing that you want here… So you’re going to say, “Well, I’m going to give up on having my concepts, my features, be strictly orthogonal, but I’d like them to not interfere that much. I’m going to ask them to be almost orthogonal.”

(04:46:56)
Then this would say that once you set a threshold for what you’re willing to accept in terms of how much cosine similarity there is, the number of such vectors is actually exponential in the number of neurons that you have. So at some point, that’s not going to even be the limiting factor. But there’s some beautiful results there. And in fact, it’s probably even better than that in some sense, because that’s for saying that any random set of features could be active. But in fact the features have sort of a correlational structure, where some features are more likely to co-occur and other ones are less likely to co-occur. And so neural networks, my guess would be, could do very well in terms of going and packing things, to the point that that’s probably not the limiting factor.
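A quick numerical illustration of the almost-orthogonal packing (the dimension and vector count are arbitrary toy choices): random unit vectors in d dimensions interfere at only about 1/sqrt(d) cosine similarity, so far more than d of them can coexist below a modest interference threshold.

```python
import numpy as np

# Pack 8x more random unit directions than dimensions and measure the
# worst-case pairwise cosine similarity ("interference") between them.
rng = np.random.default_rng(2)
d, n_vecs = 256, 2048

V = rng.normal(size=(n_vecs, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit vectors

cos = V @ V.T
np.fill_diagonal(cos, 0.0)                      # ignore self-similarity
max_interference = float(np.abs(cos).max())     # worst pairwise cosine
```

With these numbers the worst pairwise cosine stays well under 0.5, even though only 256 directions could be exactly orthogonal; tightening the threshold or raising d trades off against how many directions fit.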
Lex Fridman
(04:47:37)
How does the problem of polysemanticity enter the picture here?
Chris Olah
(04:47:41)
Polysemanticity is this phenomenon we observe where you look at many neurons and the neuron doesn’t just sort of represent one concept. It’s not a clean feature. It responds to a bunch of unrelated things. And superposition you can think of as being a hypothesis that explains the observation of polysemanticity. So polysemanticity is this observed phenomenon, and superposition is a hypothesis that would explain it, along with some other things.
Lex Fridman
(04:48:05)
So that makes mech interp more difficult.
Chris Olah
(04:48:08)
Right. So if you’re trying to understand things in terms of individual neurons and you have polysemantic neurons, you’re in an awful lot of trouble. The easiest answer is like, “Okay, well, you’re looking at the neurons, you’re trying to understand them. This one responds to a lot of things. It doesn’t have a nice meaning. Okay, that’s bad.” Another thing you could ask is, ultimately we want to understand the weights. And if you have two polysemantic neurons, and each one responds to three things, and the other neuron responds to three things, and you have a weight between them, what does that mean? Does it mean that, between the three things on each side, there are these nine interactions going on?

(04:48:40)
It’s a very weird thing, but there’s also a deeper reason, which is related to the fact that neural networks operate on really high-dimensional spaces. So I said that our goal was to understand neural networks and understand the mechanisms. And one thing you might say is, “Well, it’s just a mathematical function. Why not just look at it, right?” One of the earliest projects I did studied these neural networks that mapped two-dimensional spaces to two-dimensional spaces, and you can sort of interpret them in this beautiful way as bending manifolds. Why can’t we do that? Well, as you have a higher-dimensional space, the volume of that space, in some sense, is exponential in the number of inputs you have. And so you can’t just go and visualize it.

(04:49:19)
So we somehow need to break that apart. We need to somehow break that exponential space into a bunch of things, some non-exponential number of things that we can reason about independently. And the independence is crucial, because it’s the independence that allows you to not have to think about all the exponential combinations of things. And things being monosemantic, things only having one meaning, things having a meaning, that is the key thing that allows you to think about them independently. And so I think if you want the deepest reason why we want to have interpretable, monosemantic features, I think that’s really the deep reason.
Lex Fridman
(04:49:58)
And so the goal here, as your recent work has been aiming at, is how do we extract the monosemantic features from a neural net that has polysemantic features and all this mess?
Chris Olah
(04:50:10)
Yes. We observe these polysemantic neurons, and we hypothesize that what’s going on is superposition. And if superposition is what’s going on, there’s actually a sort of well-established technique that is the principled thing to do, which is dictionary learning. And it turns out if you do dictionary learning, in particular if you do it in sort of a nice, efficient way that in some sense nicely regularizes it as well, called a sparse autoencoder, if you train a sparse autoencoder, these beautiful interpretable features start to just fall out where there weren’t any beforehand. So that’s not a thing that you would necessarily predict, but it turns out that works very, very well. To me, that seems like some non-trivial validation of linear representations and superposition.
Lex Fridman
(04:50:51)
So with dictionary learning, you’re not looking for particular kind of categories. You don’t know what they are, they just emerge.
Chris Olah
(04:50:57)
Exactly. And this gets back to our earlier point, when we’re not making assumptions. Gradient descent is smarter than us, so we’re not making assumptions about what’s there. I mean, one certainly could do that, right? One could assume that there’s a PHP feature and go and search for it, but we’re not doing that. We’re saying we don’t know what’s going to be there. Instead, we’re just going to go and let the sparse autoencoder discover the things that are there.
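A minimal sketch of the sparse autoencoder objective, assuming made-up shapes, hyperparameters, and random data rather than anything from the actual papers: activations are encoded into an overcomplete ReLU code, decoded back, and the code is penalized with an L1 term so that features stay sparse.

```python
import numpy as np

# Toy sparse autoencoder: 4x more latent features than activation
# dimensions, trained to reconstruct activations with a sparsity penalty.
rng = np.random.default_rng(3)
d_model, d_code, batch = 32, 128, 64

X = rng.normal(size=(batch, d_model))          # stand-in for model activations
W_enc = rng.normal(size=(d_model, d_code)) * 0.1
W_dec = W_enc.T.copy()                         # decoder initialized tied
b_enc = np.zeros(d_code)
lam, lr = 1e-3, 1e-2                           # L1 weight, learning rate

def forward(X):
    f = np.maximum(X @ W_enc + b_enc, 0.0)     # sparse feature activations
    X_hat = f @ W_dec                          # reconstruction
    loss = np.mean((X - X_hat) ** 2) + lam * np.mean(np.abs(f))
    return f, X_hat, loss

f, X_hat, loss0 = forward(X)

# One manual gradient step on the decoder only; a real run trains encoder
# and decoder jointly for many steps on real activations.
dX_hat = 2 * (X_hat - X) / X.size              # grad of MSE w.r.t. X_hat
W_dec -= lr * (f.T @ dX_hat)
_, _, loss1 = forward(X)                       # loss decreases
```

The decoder rows here play the role of the feature directions: after training, each latent coordinate ideally fires for one interpretable concept, and its decoder row is that concept’s direction in activation space.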

Monosemanticity

Lex Fridman
(04:51:16)
So can you talk through the Towards Monosemanticity paper from October last year? It had a lot of nice breakthrough results.
Chris Olah
(04:51:24)
That’s very kind of you to describe it that way. Yeah, I mean, this was our first real success using sparse autoencoders. So we took a one-layer model, and it turns out if you go and you do dictionary learning on it, you find all these really nice interpretable features. So the Arabic feature, the Hebrew feature, the Base64 features were some examples that we studied in a lot of depth and really showed that they were what we thought they were. It turns out, as well, that if you train a model twice, training two different models and doing dictionary learning, you find analogous features in both of them. So that’s fun. You find all kinds of different features. So that was really just showing that this works. And I should mention that there was this Cunningham et al. paper that had very similar results around the same time.
Lex Fridman
(04:52:08)
There’s something fun about doing these kinds of small scale experiments and finding that it’s actually working.
Chris Olah
(04:52:14)
Yeah, well, and that there’s so much structure here. So maybe stepping back, for a while I thought that maybe all this mechanistic interpretability work, the end result was going to be that I would have an explanation for why it was sort of very hard and not going to be tractable. We’d be like, “Well, there’s this problem with superposition, and it turns out superposition is really hard, and we’re kind of screwed.” But that’s not what happened. In fact, a very natural, simple technique just works. And so then that’s actually a very good situation. I think this is a sort of hard research problem, and it’s got a lot of research risk, and it might still very well fail, but I think that some very significant amount of research risk was put behind us when that started to work.
Lex Fridman
(04:52:57)
Can you describe what kind of features can be extracted in this way?
Chris Olah
(04:53:02)
Well, so it depends on the model that you’re studying. So the larger the model, the more sophisticated they’re going to be. And we’ll probably talk about follow-up work in a minute. But in these one-layer models, some very common things I think were languages, both programming languages and natural languages. There were a lot of features that were specific words in specific contexts, such as “the”. And I think really the way to think about this is that “the” is likely about to be followed by a noun. So you could think of this as a “the” feature, but you could also think of this as predicting a specific noun feature. And there would be these features that would fire for “the” in the context of, say, a legal document or a mathematical document or something like this. And so maybe in the context of math, you see “the” and then predict vector or matrix, all these mathematical words, whereas in other contexts you would predict other things. That was common.
Lex Fridman
(04:53:54)
And basically we need clever humans to assign labels to what we’re seeing.
Chris Olah
(04:54:00)
Yes. So the only thing this is doing is sort of unfolding things for you. So if everything was folded on top of itself, superposition folded everything on top of itself and you can’t really see it, this is unfolding it. But now you still have a very complex thing to try to understand. So then you have to do a bunch of work understanding what these are, and some are really subtle. There’s some really cool things even in this one-layer model about Unicode, where of course some languages are in Unicode, and the tokenizer won’t necessarily have a dedicated token for every Unicode character. So instead, what you’ll have is these patterns of alternating tokens that each represent half of a Unicode character.

(04:54:40)
And you have a different feature that goes and activates on the opposing ones to be like, “Okay, I just finished a character, go and predict the next prefix. Then okay, I’m on the prefix, predict a reasonable suffix.” And you have to alternate back and forth. So these one-layer models are really interesting. And I mean, there’s another thing that you might think: “Okay, there would just be one Base64 feature.” But it turns out there’s actually a bunch of Base64 features, because you can have English text encoded as Base64, and that has a very different distribution of Base64 tokens than regular Base64. And there’s some things about tokenization as well that it can exploit. And I don’t know, there’s all kinds of fun stuff.
Lex Fridman
(04:55:21)
How difficult is the task of assigning labels to what’s going on? Can this be automated by AI?
Chris Olah
(04:55:28)
Well, I think it depends on the feature, and it also depends on how much you trust your AI. So there’s a lot of work doing automated interpretability. I think that’s a really exciting direction, and we do a fair amount of automated interpretability and have Claude go and label our features.
Lex Fridman
(04:55:42)
Is there some fun moments where it’s totally right or it’s totally wrong?
Chris Olah
(04:55:47)
Yeah, well, I think it’s very common that it says something very general, which is true in some sense, but not really picking up on the specifics of what’s going on. So I think that’s a pretty common situation. I don’t know that I have a particularly amusing one.
Lex Fridman
(04:56:06)
That’s interesting. That little gap between it being true, but not quite getting to the deep nuance of a thing. That’s a general challenge: it’s already incredible that it can say a true thing, but it’s missing the depth sometimes. And in this context, it’s like the ARC challenge, the sort of IQ type of tests. It feels like figuring out what a feature represents is a little puzzle you have to solve.
Chris Olah
(04:56:35)
Yeah. And I think that sometimes they’re easier and sometimes they’re harder as well. Yeah, I think that’s tricky. There’s another thing which, I don’t know, maybe in some ways this is my aesthetic coming in, but I’ll try to give you a rationalization. I’m actually a little suspicious of automated interpretability, and I think that’s partly just that I want humans to understand neural networks. And if the neural network is understanding it for me, I don’t quite like that. But I do have a bit of… In some ways, I’m sort of like the mathematicians who are like, “If there’s a computer-automated proof, it doesn’t count.” They won’t understand it. But I do also think that there is this kind of reflections-on-trusting-trust type issue, where there’s this famous talk about how, when you’re writing a computer program, you have to trust your compiler.

(04:57:20)
And if there was malware in your compiler, then it could go and inject malware into the next compiler, and you’d be kind of in trouble, right? Well, if you’re using neural networks to go and verify that your neural networks are safe, the hypothesis that you’re testing for is like, “Okay, well, maybe the neural network isn’t safe, and you have to worry about: is there some way that it could be screwing with you?” I think that’s not a big concern now, but I do wonder, in the long run, if we have to use really powerful AI systems to go and audit our AI systems, is that actually something we can trust? But maybe I’m just rationalizing, because I just want us to get to a point where humans understand everything.

Scaling Monosemanticity

Lex Fridman
(04:57:58)
Yeah, I mean, that’s hilarious, especially as we talk about AI safety and looking for features that would be relevant to AI safety, like deception and so on. So let’s talk about the Scaling Monosemanticity paper from May 2024. What did it take to scale this, to apply it to Claude 3 Sonnet?
Chris Olah
(04:58:18)
Well, a lot of GPUs.
Lex Fridman
(04:58:19)
A lot more GPUs. Got it.
Chris Olah
(04:58:21)
Well, one of my teammates, Tom Henighan, was involved in the original scaling laws work, and something he was interested in from very early on is: are there scaling laws for interpretability? And so something he did immediately, when this work started to succeed and we started to have sparse autoencoders work, was become very interested in what the scaling laws are for making sparse autoencoders larger, and how that relates to making the base model larger. And it turns out this works really well, and you can use it to project, if you train a sparse autoencoder of a given size, how many tokens you should train on, and so on. This was actually a very big help to us in scaling up this work, and made it a lot easier for us to go and train really large sparse autoencoders, where it’s not like training the big models, but it’s starting to get to a point where it’s actually expensive to go and train the really big ones.
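As a rough illustration of the kind of scaling-law fit described here (this is not Tom Henighan’s actual analysis, and all the numbers are invented), one can fit a power law, tokens ≈ a · features^b, in log-log space to a few small sparse-autoencoder runs and extrapolate the training-token budget for a much larger one:

```python
# Hypothetical sketch of a scaling-law fit for sparse autoencoders.
# The (num_features, best_token_budget) pairs below are made up for
# illustration; we fit tokens ~ a * features^b by least squares in log space.
import math

observed = [(1_000, 2e8), (4_000, 5e8), (16_000, 1.25e9), (64_000, 3.1e9)]

xs = [math.log(f) for f, _ in observed]
ys = [math.log(t) for _, t in observed]
n = len(observed)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# ordinary least squares slope (exponent b) and intercept (log a)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
log_a = mean_y - b * mean_x

def projected_tokens(num_features: int) -> float:
    """Extrapolate the fitted power law to a larger autoencoder."""
    return math.exp(log_a + b * math.log(num_features))

print(f"fitted exponent b = {b:.3f}")
print(f"projected tokens for 1M features: {projected_tokens(1_000_000):.3g}")
```

The useful part is the extrapolation: once the exponent is fitted on cheap small runs, the expensive large run’s token budget can be planned in advance.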
Lex Fridman
(04:59:21)
I mean, you have to do all this stuff of splitting it across lots of GPUs-
Chris Olah
(04:59:26)
Oh, yeah. No, I mean, there’s a huge engineering challenge here too, right? So there’s a scientific question of how you scale things effectively, and then there’s an enormous amount of engineering to go and scale this up. You have to shard it, you have to think very carefully about a lot of things. I’m lucky to work with a bunch of great engineers, because I am definitely not a great engineer.
Lex Fridman
(04:59:43)
And the infrastructure especially. Yeah, for sure. So, TL;DR, it turns out it worked.
Chris Olah
(04:59:49)
It worked. Yeah. And I think this is important, because you could have imagined a world where, after Towards Monosemanticity, you said, “Chris, this is great. It works on a one-layer model, but one-layer models are really idiosyncratic. Maybe the linear representation hypothesis and the superposition hypothesis are the right way to understand a one-layer model, but not the right way to understand larger models.” So I think, I mean, first of all, the Cunningham et al. paper cut through that a little bit and suggested that this wasn’t the case.

(05:00:18)
But Scaling Monosemanticity was, I think, significant evidence that even very large models, and we did it on Claude 3 Sonnet, which at that point was one of our production models, seem to be substantially explained, at least in part, by linear features. And doing dictionary learning on them works, and as you learn more features, you explain more and more. So that’s, I think, quite a promising sign. And you find really fascinating abstract features, and the features are also multimodal. They respond to images and text for the same concept, which is fun.
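A minimal sketch of the dictionary-learning setup being described, with toy dimensions and hand-picked weights rather than anything from the paper: an overcomplete ReLU encoder produces sparse, non-negative feature coefficients, a linear decoder reconstructs the activation, and training would minimize reconstruction error plus an L1 sparsity penalty.

```python
# Toy sparse autoencoder forward pass (illustrative only; real SAEs use
# millions of learned features, not 4 hand-picked ones).

def relu(xs):
    return [max(0.0, x) for x in xs]

def matvec(mat, vec):
    return [sum(w * v for w, v in zip(row, vec)) for row in mat]

D_MODEL, N_FEATURES = 2, 4  # overcomplete: more features than dimensions

# encoder: N_FEATURES x D_MODEL, decoder: D_MODEL x N_FEATURES
W_enc = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b_enc = [0.0, 0.0, 0.0, 0.0]
W_dec = [[1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]]

def sae(activation):
    """Return (sparse feature coefficients, reconstruction, mse, l1)."""
    feats = relu([h + b for h, b in zip(matvec(W_enc, activation), b_enc)])
    recon = matvec(W_dec, feats)
    mse = sum((a - r) ** 2 for a, r in zip(activation, recon))
    l1 = sum(feats)  # the sparsity penalty pushes most features to zero
    return feats, recon, mse, l1

feats, recon, mse, l1 = sae([0.5, -2.0])
print("active features:", [i for i, f in enumerate(feats) if f > 0])
print("reconstruction:", recon, "mse:", mse)
```

Only a couple of the four features activate for the example input, which is the “sparse combination of dictionary directions” picture in miniature.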
Lex Fridman
(05:00:54)
Yeah. Can you explain that? I mean, backdoor, there’s just a lot of examples that you can-
Chris Olah
(05:01:01)
Yeah. So maybe let’s start with one example: we found some features around security vulnerabilities and backdoors in code. It turns out those are actually two different features. So there’s a security vulnerability feature, and if you force it active, Claude will start to go and write security vulnerabilities, like buffer overflows, into code. And it also fires for all kinds of things; some of the top dataset examples were things like “dash dash disable SSL” or something like this, which are sort of obviously really insecure.
Lex Fridman
(05:01:34)
So at this point, maybe it’s just because the examples are presented that way, it kind of surfaces the more obvious examples. I guess the idea is that down the line it might be able to detect more nuanced things, like deception or bugs or that kind of stuff.
Chris Olah
(05:01:50)
Yeah. Well, maybe I want to distinguish two things. One is the complexity of the feature or the concept, and the other is how subtle the examples we’re looking at are. So when we show the top dataset examples, those are the most extreme examples that cause that feature to activate. It doesn’t mean that it doesn’t fire for more subtle things. So for that insecure-code feature, the stuff it fires most strongly for are these really obvious disable-the-security type things, but it also fires for buffer overflows and more subtle security vulnerabilities in code. These features are all multimodal. You could ask, “What images activate this feature?” And it turns out that the security vulnerability feature activates for images of people clicking in Chrome to go past the warning that a website’s SSL certificate might be wrong, or something like this.

(05:02:55)
Another thing that’s very entertaining is the backdoors-in-code feature. You activate it, and Claude writes a backdoor that will go and dump your data to a port or something. But you can ask, “Okay, what images activate the backdoor feature?” It was devices with hidden cameras in them. So there’s apparently a whole genre of people selling devices that look innocuous and have hidden cameras in them, with ads advertising the hidden camera. And I guess that is the physical version of a backdoor. And so it sort of shows you how abstract these concepts are, and I just thought that was… I’m sort of sad that there’s a whole market of people selling devices like that, but I was kind of delighted that that was what came up as the top image examples for the feature.
Lex Fridman
(05:03:36)
Yeah, it’s nice. It’s multimodal, it’s almost multi-context. It’s a broad, strong definition of a singular concept. It’s nice.
Chris Olah
(05:03:44)
Yeah.
Lex Fridman
(05:03:45)
To me, one of the really interesting features, especially for AI safety, is deception and lying, and the possibility that these kinds of methods could detect lying in a model, especially as models get smarter and smarter. Presumably that’s a big threat from a superintelligent model, that it can deceive the people operating it as to its intentions and that kind of stuff. So what have you learned from detecting lying inside models?
Chris Olah
(05:04:13)
Yeah, so I think we’re in some ways in early days for that. We find quite a few features related to deception and lying. There’s one feature that fires for people lying and being deceptive, and if you force it active, Claude starts lying to you. So we have a deception feature. I mean, there are all kinds of other features about withholding information and not answering questions, features about power-seeking and coups and stuff like that. So there are a lot of features related to spooky things, and if you force them active, Claude will behave in ways that are… they’re not the kinds of behaviors you want.
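The “force it active” interventions mentioned here are, in published feature-steering work, typically implemented by adding a multiple of a feature’s decoder direction to the model’s residual-stream activations. A toy sketch, with invented vectors standing in for real model internals:

```python
# Hedged sketch of clamping a feature active via activation steering.
# The residual vector and feature direction below are made-up toy values,
# not anything from Claude.

def steer(activation, feature_direction, strength):
    """Add `strength` times the unit feature direction to an activation."""
    norm = sum(x * x for x in feature_direction) ** 0.5
    unit = [x / norm for x in feature_direction]
    return [a + strength * u for a, u in zip(activation, unit)]

residual = [0.2, -0.1, 0.4]        # toy residual-stream activation
deception_dir = [0.0, 1.0, 0.0]    # toy decoder row for one feature

steered = steer(residual, deception_dir, strength=5.0)
print(steered)  # the feature's direction now dominates the activation
```

With a large enough strength, downstream layers see an activation that looks as if the feature had fired strongly, which is what makes the behavioral effects described above possible.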
Lex Fridman
(05:04:50)
What are possible next exciting directions to you in the space of Mechinterp?
Chris Olah
(05:04:56)
Well, there’s a lot of things. So for one thing, I would really like to get to a point where we have circuits, where we can really understand not just the features, but then use that to understand the computation of models. That really, for me, is the ultimate goal of this. And there’s been some work, we put out a few things; there’s a paper from Sam Marks that does some stuff like this, and there’s been, I’d say, some work around the edges here. But I think there’s a lot more to do, and I think that will be a very exciting thing. That’s related to a challenge we call interference weights, where, due to superposition, if you just sort of naively look at which features are connected together, there may be some weights that don’t exist in the underlying model, but are just sort of artifacts of superposition. So that’s a technical challenge. Related to that, I think another exciting direction is that you might think of sparse autoencoders as being kind of like a telescope. They allow us to look out and see all these features that are out there, and as we build better and better sparse autoencoders and get better and better at dictionary learning, we see more and more stars, and we zoom in on smaller and smaller stars. There’s a lot of evidence that we’re still only seeing a very small fraction of the stars. There’s a lot of matter in our neural network universe that we can’t observe yet. And it may be that we’ll never have fine enough instruments to observe it; maybe some of it just isn’t computationally tractable to observe. So it’s sort of a kind of dark matter, not maybe in the sense of modern astronomy, but of early astronomy, when we didn’t know what this unexplained matter was. And so I think a lot about that dark matter, and whether we’ll ever observe it, and what it means for safety if we can’t, if some significant fraction of neural networks is not accessible to us.
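The interference-weights worry can be made concrete with a toy calculation. This is a hypothetical sketch, not Anthropic’s method: with invented encoder/decoder directions and a linear map, the naive feature-to-feature “virtual weight” is each upstream feature’s decoder direction pushed through the map and projected onto each downstream feature’s encoder direction. When features share directions because of superposition, nonzero entries can appear that don’t correspond to any single real connection.

```python
# Toy "virtual weights" between SAE features across a linear map.
# All matrices and directions are invented for illustration.

def matvec(mat, vec):
    return [sum(w * v for w, v in zip(row, vec)) for row in mat]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

W = [[1.0, 0.0], [0.0, 1.0]]              # toy linear map between layers

upstream_dec = [[1.0, 0.0], [0.6, 0.8]]    # decoder direction per feature
downstream_enc = [[1.0, 0.0], [0.0, 1.0]]  # encoder direction per feature

# virtual[j][i]: apparent weight from upstream feature i to downstream j
virtual = [[dot(e, matvec(W, d)) for d in upstream_dec]
           for e in downstream_enc]
print(virtual)
```

Here upstream feature 1 overlaps both downstream directions, so both entries in its column are nonzero, even though no single dedicated edge exists in the underlying map; distinguishing real computation from this kind of interference is the stated challenge.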

Macroscopic behavior of neural networks


(05:06:56)
Another question that I think a lot about is that, at the end of the day, mechanistic interpretability is this very microscopic approach to interpretability. It’s trying to understand things in a very fine-grained way, but a lot of the questions we care about are very macroscopic. We care about these questions about neural network behavior, and I think that’s the thing I care most about, but there are lots of other larger-scale questions you might care about. And the nice thing about having a very microscopic approach is that it’s maybe easier to ask, “Is this true?” But the downside is that it’s much further from the things we care about. And so we now have this ladder to climb. And I think there’s a question of whether we will be able to find larger-scale abstractions that we can use to understand neural networks, so that we can get up from this very microscopic approach.
Lex Fridman
(05:07:48)
Yeah. You’ve written about this as the kind of “organs question.”
Chris Olah
(05:07:52)
Yeah, exactly.
Lex Fridman
(05:07:53)
If we think of interpretability as a kind of anatomy of neural networks, most of the circuits work involves studying tiny little veins, looking at the small scale: individual neurons and how they connect. However, there are many natural questions that the small-scale approach doesn’t address. In contrast, the most prominent abstractions in biological anatomy involve larger-scale structures: individual organs like the heart, or entire organ systems like the respiratory system. And so we wonder, is there a respiratory system or heart or brain region of an artificial neural network?
Chris Olah
(05:08:29)
Yeah, exactly. And I mean, if you think about science, a lot of scientific fields investigate things at many levels of abstraction. In biology, you have molecular biology studying proteins and molecules and so on, then you have cellular biology, and then you have histology studying tissues, and then you have anatomy, and then you have zoology, and then you have ecology. So you have many, many levels of abstraction. Or physics: maybe you have the physics of individual particles, and then statistical physics gives you thermodynamics and things like this. So you often have different levels of abstraction.

(05:09:01)
And I think that right now, mechanistic interpretability, if it succeeds, is sort of like a microbiology of neural networks, but we want something more like anatomy. And a question you might ask is, “Why can’t you just go there directly?” And I think the answer is superposition, at least in significant part. It’s actually very hard to see this macroscopic structure without first breaking down the microscopic structure in the right way and then studying how it connects together. But I’m hopeful that there is going to be something much larger than features and circuits, and that we’re going to be able to have a story that involves much bigger things. And then you can study in detail the parts you care about.
Lex Fridman
(05:09:43)
I suppose, continuing the biology analogy, like a psychologist or a psychiatrist of a neural network.
Chris Olah
(05:09:48)
And I think the beautiful thing would be if, rather than having disparate fields for those two things, we could build a bridge between them, such that you could have all of your higher-level abstractions be grounded very firmly, ideally, in this very solid, more rigorous foundation.
Lex Fridman
(05:10:11)
What do you think is the difference between the human brain, the biological neural network and the artificial neural network?
Chris Olah
(05:10:17)
Well, the neuroscientists have a much harder job than us. Sometimes I just count my blessings by how much easier my job is than the neuroscientists’. We can record from all the neurons, and we can do that on arbitrary amounts of data. The neurons don’t change while you’re doing that, by the way. You can go and ablate neurons, you can edit the connections and so on, and then you can undo those changes. That’s pretty great. You can intervene on any neuron and force it active and see what happens. You know which neurons are connected to what. Neuroscientists want to get the connectome; we have the connectome, and we have it for things much bigger than C. elegans. And not only do we have the connectome, we know which neurons excite or inhibit each other. It’s not just that we know the binary mask; we know the weights. We can take gradients; we know computationally what each neuron does. The list goes on and on. We just have so many advantages over neuroscientists. And then, despite having all those advantages, it’s really hard. So one thing I do sometimes think is, “Gosh, if it’s this hard for us, it seems near impossible under the constraints of neuroscience.” I don’t know. Maybe part of it is that I’ve got a few neuroscientists on my team, and I’m sort of like, “Ah, the neuroscientists. Maybe some of them would like to have an easier problem that’s still very hard, and they could come and work on neural networks. And then after we figure out things in the easy little pond of trying to understand neural networks, which is still very hard, we could go back to biological neuroscience.”
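The interventions Chris lists, like ablating a neuron and then undoing it, really are trivial in an artificial network. A toy illustration with invented weights: zero out one hidden unit, compare outputs, and the change measures that unit’s contribution.

```python
# Toy illustration of a clean, reversible neuron ablation.
# The two-neuron network and its weights are invented for this example.

def relu(x):
    return max(0.0, x)

def tiny_net(x, ablate=None):
    """One hidden layer with 2 neurons; optionally zero one out."""
    hidden = [relu(1.0 * x), relu(-1.0 * x)]
    if ablate is not None:
        hidden[ablate] = 0.0  # the intervention: knock out one neuron
    return 2.0 * hidden[0] + 3.0 * hidden[1]

baseline = tiny_net(1.5)           # both neurons intact
ablated = tiny_net(1.5, ablate=0)  # neuron 0 knocked out, trivially undone
print(baseline, ablated)           # the gap is neuron 0's contribution
```

In a biological brain the equivalent experiment is destructive, noisy, and unrepeatable, which is the asymmetry being described.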

Beauty of neural networks

Lex Fridman
(05:11:51)
I love what you’ve written about the goal of MechInterp research as two goals, safety and beauty. So can you talk about the beauty side of things?
Chris Olah
(05:11:59)
Yeah. So there’s this funny thing where I think some people are kind of disappointed by neural networks, I think, where they’re like, “Ah, neural networks, it’s just these simple rules. Then you just do a bunch of engineering to scale it up and it works really well. And where’s the complex ideas? This isn’t a very nice, beautiful scientific result.” And I sometimes think when people say that, I picture them being like, “Evolution is so boring. It’s just a bunch of simple rules. And you run evolution for a long time and you get biology. What a sucky way for biology to have turned out. Where’s the complex rules?” But the beauty is that the simplicity generates complexity.

(05:12:41)
Biology has these simple rules, and it gives rise to all the life and ecosystems that we see around us, all the beauty of nature. That all just comes from evolution, from something very simple in evolution. And similarly, I think that neural networks create enormous complexity, beauty, and structure inside themselves that people generally don’t look at and don’t try to understand, because it’s hard to understand. But I think there is an incredibly rich structure to be discovered inside neural networks, a lot of very deep beauty, if we’re just willing to take the time to go and see it and understand it.
Lex Fridman
(05:13:20)
Yeah, I love Mechinterp. The feeling that we are understanding, or getting glimpses of understanding, the magic that’s going on inside is really wonderful.
Chris Olah
(05:13:30)
It feels to me like one of the questions that’s just calling out to be asked, and I’m sort of… I mean, a lot of people are thinking about this, but I’m often surprised that more aren’t, is: how is it that we don’t know how to directly create computer programs that can do these things, and yet we have these neural networks that can do all these amazing things? It just feels like that is obviously the question calling out to be answered. If you have any degree of curiosity, it’s like, “How is it that humanity now has these artifacts that can do things we don’t know how to do?”
Lex Fridman
(05:14:06)
Yeah. I love the image of the circuits reaching towards the light of the objective function.
Chris Olah
(05:14:11)
Yeah, it’s this organic thing that we’ve grown and we have no idea what we’ve grown.
Lex Fridman
(05:14:15)
Well, thank you for working on safety, and thank you for appreciating the beauty of the things you discover. And thank you for talking today, Chris, this was wonderful.
Chris Olah
(05:14:23)
Thank you for taking the time to chat as well.
Lex Fridman
(05:14:26)
Thanks for listening to this conversation with Chris Olah, and before that, with Dario Amodei and Amanda Askell. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Alan Watts. “The only way to make sense out of change is to plunge into it, move with it, and join the dance.” Thank you for listening and hope to see you next time.

Transcript for Rick Spence: CIA, KGB, Illuminati, Secret Societies, Cults & Conspiracies | Lex Fridman Podcast #451

This is a transcript of Lex Fridman Podcast #451 with Rick Spence.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Rick Spence
(00:00:00)
Most people, most of the time are polite, cooperative, and kind until they’re not.
Lex Fridman
(00:00:13)
The following is a conversation with Rick Spence, a historian specializing in the history of intelligence agencies, espionage, secret societies, conspiracies, the occult and military history. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now dear friends, here’s Rick Spence.

KGB and CIA


(00:00:38)
You have written and lectured about serial killers, secret societies, cults and intelligence agencies. So we can basically begin at any of these fascinating topics, but let’s begin with intelligence agencies. Which has been the most powerful intelligence agency in history?
Rick Spence
(00:00:55)
The most powerful intelligence agency in history. It’s an interesting question. I’d say, probably, in terms of historical longevity and consistency of performance, the Russian intelligence services. Notice I didn’t say the KGB specifically, but the Russian intelligence services, going back to the Czarist period, are consistently pretty good. Not infallible, none of them are. Of course, there’s a common Western way of looking at anything Russian. Very often, and I think it’s still the case, Russians are viewed in one of two ways: either they are bumbling idiots or they’re diabolically clever, no sort of middle ground. You can find examples of both in this.

(00:01:49)
So what I mean by that is that if you’re looking at the modern SVR or FSB, which are just two different organizations that used to be part of the one big KGB, or at the KGB and its predecessor, the Cheka, you’re really going back to the late 19th century and the Imperial Russian intelligence and security service, generally known as the Okhrana.

(00:02:17)
It’s really the Department of Police, the Special Corps of Gendarmes. Their primary job was protecting the imperial regime against internal enemies, revolutionaries for the most part. They got very, very good at that by co-opting people within those movements, infiltrating, and recruiting informers and agents provocateurs. In fact, they excelled at the agent provocateur.

(00:02:46)
A person who is placed inside an organization to cause trouble. Usually you maneuver them into a position of leadership, and they provoke actions that then allow you to crack down on the organization, or lure it into an illegal or open status so that it can be more effectively suppressed. They were very good at that. So good that by the early 20th century, in the years preceding the Russian Revolution in 1917, they had effectively infiltrated every radical party, Bolsheviks, Mensheviks, SRs, great and small, and placed people in positions of influence and leadership, to the point that arguably, and you can debate this, I think on the whole they could largely dictate what those parties did.

(00:03:42)
Nothing was discussed at any central committee meeting of any revolutionary group that the Okhrana wasn’t immediately aware of, and they often had people in positions to influence what those decisions were. Of course, that raises an interesting question: if they were that good, and they had infiltrated and effectively controlled most of the opposition, then how did the regime get overthrown by revolutionaries? The answer is that it wasn’t overthrown by revolutionaries; it was overthrown by politicians. That would take us into a detour into Russian history, but I’ll just leave it with this. If you look at 1917, and you look closely, this is one of the things I’d always tell my students: there are two Russian revolutions in 1917. There’s the first one, in March or February, depending on your calendar, that overthrows Nicholas II. Revolutionaries are really not involved with that.

(00:04:40)
Bolsheviks are nowhere to be seen. Trotsky and Lenin are nowhere to be seen. They have nothing to do with that. That has to do with a political conspiracy within the Russian parliament, the Duma, to unseat an emperor they thought was bungling the war and was essentially a loser to begin with. It was a coup d’etat, a parliamentary coup d’etat. The temporary or provisional government that that revolution put in power was the one overthrown by Lenin eight months later. That government was essentially one dominated by moderate socialists; it was a government that very quickly turned to the left. The guy we associate with it is Alexander Kerensky, a Russian socialist politician who was the quasi-dictator of that regime. He’s the person, not the Tsar, who’s overthrown by Lenin. So the revolutionaries did not prove to be the fatal threat to the Tsarist regime.

(00:05:46)
It was the Tsarist political system itself that did that. What then transpired was that the Okhrana, its methods, and many of its agents immediately segued over into the new Soviet security service. One of the first things Lenin did in December of 1917, within a month of seizing power, since his hold on power was tenuous at best, was to decide that, well, you were going to need some kind of organization to infiltrate and suppress those pesky counter-revolutionaries and foreign imperialists and all the other enemies that we have. So the Extraordinary Commission to Combat Counter-Revolution and Sabotage, the Cheka, was formed. You put a veteran Bolshevik, Felix Dzerzhinsky, at the head of it, someone you could politically rely upon, but Dzerzhinsky built his organization essentially out of the Okhrana. There were all of these informers sitting around with nothing to do, and they were employed: in the early twenties, the rank-and-file of the Cheka might have been 80 to 90% former Imperial officials. Those were gradually decreased over time.

(00:07:02)
So why would they do that? Well, they were professionals. They also needed to eat, and things were somewhat precarious. So if your job is to be an agent provocateur, if your job is to infiltrate targeted organizations and lead them astray, you do that for whoever pays you. That’s part of the professionalism. Under the Soviets, the Soviet intelligence services are also very good at that. They’re very good at infiltrating people into opposing organizations. I guess the one example I would give to demonstrate that is the Cambridge Five, the British traitors, from the Soviet standpoint, heroes, who were recruited: most notably Kim Philby, Guy Burgess, Donald Maclean, Anthony Blunt, and there may have been well more than five. But that wasn’t bad out of just Cambridge.

(00:07:59)
Then they placed those people in high positions. The ultimate goal, of course, is to get your people into positions of leadership and influence in the opposing intelligence service. And they did. Of course, it all fell apart, and Philby ended up living the last part of his life in exile in Moscow, but they got their money’s worth out of him. You can also find this in KGB infiltration of the CIA and the FBI: the Aldrich Ames and Robert Hanssen cases. Of course, we were infiltrating too. By we, I mean the Americans and the West managed to infiltrate our moles as well. Someone could dispute this, but I would think, if it came down to a kind of who-had-the-most-moles Super Bowl, the Soviets would come out somewhat ahead.
Lex Fridman
(00:08:57)
So the scale of the infiltration, the number of people, and the skill of it… is there a case to be made that the Okhrana and the Cheka orchestrated both components of the Russian Revolution as you described them?
Rick Spence
(00:09:14)
Well, there’s an interesting question for me. There are all kinds of questions about this. One of the questions is whether or not Lenin was an Okhrana agent. Okay, I’ve just said heresy. I’ll do that quite often. I am a heretic and proud of it.
Lex Fridman
(00:09:31)
Great.
Rick Spence
(00:09:33)
Why would you possibly say that Lenin could have been an Okhrana agent? Well, let’s look at what he managed to do. Coming into the 20th century, you had, nominally, a single Marxist movement, the Russian Social Democratic Labor Party, and the Bolsheviks and Mensheviks, majority-ites and minority-ites, are merely factions of that party. They always agreed that they were all Marxists: we all believe in dialectical materialism; we’re all socialists, comrade. The difference was the tactical means by which one would attain this. What Lenin wanted was a militant, small-scale vanguard party. He wanted a revolution, wanted to seize power, seize control of the state.

(00:10:31)
Once you have the state, then you induce socialism from above. Whereas the majority of the people, the so-called Mensheviks, the minority-ites, were, oddly enough, the vast majority of the party. That’s one of the first things: how do you lose that argument? How does the minority get to grab the name? But Lenin did that. So what Lenin wanted was a conspiratorial party of committed revolutionaries that would plot and scheme and undermine and eventually seize control of the state and induce socialism from above. There were other Russian Marxists who thought that sounded vaguely totalitarian, not really democratic, and not even terribly socialist. They opposed it, ineffectively, from the beginning, outmaneuvered every step of the way. The Mensheviks are a case study in the failure of a political organization. That too will be heresy to some people.

(00:11:38)
But look, they lost. So what Lenin managed to do, starting around 1903 and continuing onward, is take what had been a single Marxist party and split it into angry, contending factions, with he and his Bolsheviks on one side advocating a much more militant, conspiratorial policy, and the discombobulated Mensheviks over on the other. And in between were a lot of people who really didn’t know where they stood on this. Sometimes they kind of agreed: he seems to be making sense today; no, I don’t think he’s making sense the next day. But he managed to completely disunify the organization. Now, who could possibly have seen benefit in that? The Okhrana. Now, whether or not they put him up to it, whether or not in some way they helped move him into a position of leadership, or encouraged it through people around him, whether he was a witting or unwitting agent of the Tsar’s secret police, he certainly accomplished exactly what they had wanted.

(00:12:52)
I find that suspicious. It’s one of those things that’s so convenient, in a way, that I’m not necessarily sure it was an accident. There’s also this whole question to me as to what was going on within the Okhrana itself. Now, this is one of those questions we may come to later, about how intelligence agencies interact with, or serve, the governments to which they are theoretically subordinate. They do tend to acquire a great deal of influence and power. After all, their main job is to collect information, and that information could be about all kinds of things, including people within the government structure itself.

(00:13:43)
They also know how to leverage that information to get people to do what you want them to do. So an argument can be made, again, an argument, not a fact, merely an opinion, which is mostly what history is made out of, opinions, that at some point between about 1900 and 1917, people within the Okhrana were playing their own game. That game took them in a direction which meant that continued loyalty to the emperor, specifically to Nicholas II, was no longer part of it.

(00:14:23)
To me, it seems almost as if, during the events of 1917, you had an organization that was very effective that suddenly just becomes ineffective. It doesn’t really disappear; these things don’t go away, because it will reappear as the Cheka fairly quickly. But it raises the question to me as to what degree there were people within the organization who allowed events to take the course they wished.

Okhrana, Cheka, NKVD

Lex Fridman
(00:14:55)
I always wonder how much deliberate planning there is within an organization like Okhrana or if there’s kind of a distributed intelligence that happens.
Rick Spence
(00:15:07)
Well, one of the key elements in any kind of intelligence organization or operation is compartmentalization, need-to-know. So rarely do you have an occasion where everybody in an executive position is brought into a big corporate meeting and you discuss all of the secret operations that are going on. No, no, you never do that. Only a very limited number of people should know about that. If you have a person who is a case officer controlling agents, he’s the only one who should know who those people are, possibly his immediate superiors, but in no way do you want that to be common knowledge. So information within the organization itself is compartmentalized. You don’t need everybody to be in on it. You don’t even necessarily need the people who are nominally at the top. In the case of the Okhrana, the real boss of the Okhrana was the Imperial Ministry of the Interior, the Minister of the Interior, in fact.

(00:16:07)
But the Minister of the Interior had no real effective control over this at all. To the point that, at one point early on, they actually organized the assassination of their own boss: they had their agents among the revolutionaries kill the Minister of the Interior. He’ll just be replaced by another one. He’s an Imperial bureaucrat; he’s not really part of their organization. It’s like a director of an intelligence agency appointed by the president. Maybe he’s part of the organization, maybe he isn’t; maybe he’s not one of us. So you’ve got different levels, different compartments within it. Who’s actually running the show, if anyone is, I don’t know. That’s never supposed to be apparent.
Lex Fridman
(00:17:00)
Well, that’s a fascinating question. You could see this with NKVD. It’s obviously an extremely powerful organization that starts to eat itself, where everybody’s pointing fingers internally also as a way to gain more power. So the question is in organizations like that that are so-called compartmentalized, where’s the power? Where’s the center of power? Because you would think given that much power, some individual or a group of individuals will start accumulating that power. But it seems like that’s not always a trivial thing because if you get too powerful, the snake eats that person.
Rick Spence
(00:17:43)
Well, if we go back again to the founder of the Soviet secret police: Felix Dzerzhinsky dies in 1926, keels over after giving a heated speech to a party meeting. Now, the common view, what you usually read, which was typical for the time, is that clearly Stalin had him whacked, because anytime someone died, it was almost always that. I think a lot of times he did. But in some cases, Stalin’s probably getting blamed for things that he didn’t actually do. Dzerzhinsky wasn’t even opposed to Stalin, so it’s not clear why he would, but: Stalin did it, obviously, he was poisoned, something happened, it was an unnatural death. Somebody goes in for an operation, gets a little too much anesthesia, Stalin killed them. Somebody tips over in a canoe in upstate New York, Stalin killed them. There’s actually a case about that. So that itself can be kind of useful, where every time someone dies, they think you killed them.

(00:18:53)
That’s kind of an interesting method of intimidation in that regard. But the suspicion is nonetheless there. Dzerzhinsky was the grand inquisitor. He was seemingly firmly in control of the organization. Of course, maybe he wasn’t. My guess would be that if Dzerzhinsky’s death was not natural causes, he was probably eliminated by someone within his own organization. Then you look at the people who take over. His immediate successor is Vyacheslav Menzhinsky, who’s not really a secret policeman, more a kind of intellectual dilettante. But behind him is a fellow, Genrikh Yagoda, and Yagoda will really manage things from behind the scenes until Menzhinsky dies in 1930.

(00:19:52)
Then Yagoda will hold on until he’s a victim of the purges, I think in ’37 or ’38. Yagoda is ambitious, murderous, and if I was going to point the finger at anybody who possibly had Dzerzhinsky whacked, it would be him, for the purposes simply of advancement. The person to look out for in any kind of corporate organization is your immediate subordinate, the person who could move into your job, because more than likely, that’s exactly what they’re planning to do.
Lex Fridman
(00:20:31)
Yeah, just one step away from the very top, somebody there will probably accumulate the most power. You mentioned that the various Russian intelligence agencies were good at creating agents provocateurs and infiltrating the halls of power. What does it take to do that?
Rick Spence
(00:20:53)
Well, there’s an interesting little acronym called MICE, M-I-C-E. It’s generally used, and it’s just the way in which you acquire agents. How do you get people to work for you? Well, M stands for money. You pay them. People are greedy. They want money. If you look at Aldrich Ames, he had a very, very expensive wife with expensive tastes. So he wanted money. I is for ideology. Particularly in the 1920s and the 1930s, the Soviets were very effective in exploiting communists, people who wanted to serve the great cause, even though that’s initially not really what the Soviets wanted to do. Because the idea was that if you recruit agents from among, let’s say, American communists, you compromise the party, because exactly what your enemies are going to say is that all communists are Soviet spies. They’re all traitors in some way. So you would really want to keep those two things separate.

(00:21:55)
But ideology was just so convenient, and those people would just work for you so well. You could get them to do anything, betray their grandmother. They would go ahead and do that for the greater good. So ideology can be a motivation, and that can be someone who is a devoted Marxist-Leninist. It can also be someone who’s a disgruntled communist because there’s no anti-communist like an ex-communist.

(00:22:25)
Those who lose the faith can become very, very useful. For instance, if you look at the case of American intelligence, the people who essentially temporarily destroyed much of the KGB organization in the US post-World War II were people like Whittaker Chambers, Louis Budenz, Elizabeth Bentley. All of those people had been Communist Party members. They had all been part of the Red Faithful. They all, for one reason or another, became disillusioned and turned rat or patriot, whichever way you may want to put it in that regard.
Lex Fridman
(00:23:12)
What do the C and the E stand for?
Rick Spence
(00:23:14)
The C is for coercion. That’s where you have to persuade someone to work for you. You have to pressure them. So usually you blackmail them. That could be a gambling habit. In the old days, it was very often that they were gay. Get them in a position where they can be compromised and you can get them to do your bidding. You usually have a certain amount of control over those people. Here’s an interesting example of how the Okhrana tended to handle this, and I think it’s still largely used. You’d round up a bunch of revolutionaries on some charge or another, distributing revolutionary literature, running an illegal printing press. You bring a guy into the room and you say, okay, you’re going to work for us. Of course, you refuse to do so. They go, well, if you refuse, we’ll keep the rest of your comrades in jail for a while, maybe beat them with a rubber truncheon or so, and then we’re just going to let you go. We’re just going to put you back out on the street.

(00:24:17)
If you don’t work for us, we will spread the rumor through our agents already in your organization that you are. Then what will your comrades do? How long are you going to live? So you see, you have no choice. You’re ours, and you’re going to cooperate with us. The way that that effectiveness will be ensured is that you have multiple agents within the same organization who don’t know who each other are. That’s very important. They’ll all be filing reports. So let’s say you have three agents inside the central committee of the SR party, and there’s a committee meeting, and you’re going to look at the reports they file. They all better agree with each other. If one person doesn’t report what the other two do, then perhaps they’re not entirely doing their job and they can be liquidated at any time. All you do is drop the dime on them.

(00:25:18)
This was done periodically. In fact, in some cases, you would betray your own agents just to completely discombobulate the organization. This happened in one particular case around 1908. The head of the chief revolutionary terrorist organization, which wasn’t Bolshevik but the so-called Socialist Revolutionaries, actually the biggest revolutionary party, the SRs, who aren’t actually Marxists, more anarchists, but they went all in for the propaganda of the deed. They really liked blowing people up and carried out quite a campaign of terrorism. The head of that terrorist organization was a fellow by the name of Yevno Azef. Yevno Azef was, guess what? An Okhrana agent. Everything he did, every assassination that he planned, he did in consultation with his control. So he’d kind of run out his string. There was increasing suspicion of him.

(00:26:23)
He was also asking for a lot more money. So the Okhrana itself arranged to have him ratted out. What did that do? Well, what do you do in your party when you find out the chief of your terrorist brigade was a secret police agent? It’s consternation and mistrust. Nobody in the party would ever trust anyone again, and you couldn’t tell who you were sitting around with. A fellow I wrote a biography on, Boris Savinkov, was a Russian revolutionary and the second in command within the terrorist organization. By the way, the guy wanted Azef’s job so bad he could taste it. Well, on one level, he expressed absolute horror that his boss was a police agent, and well he should, because Savinkov was a police agent too. See, they already had the number two waiting in the wings to take over, but he was legitimately shocked. He didn’t really suspect that.

(00:27:23)
So it’s a way of manipulating this. Then finally, we come to the E. That, I think, is the most important: ego. Sometimes people spy or betray because of the egotistical satisfaction that they receive, the sheer kind of Machiavellian joy in deceit. An example of that would be Kim Philby, one of the Cambridge Five. Now, Philby was a communist, and he would argue that he always saw himself as serving the communist cause. But he also made this statement, I think it’s in the preface to his autobiography: one never looks twice at the offer of service in an elite force. He’s talking about his recruitment by the NKVD in the 1930s, and he was absolutely chuffed by that.

(00:28:21)
The mere fact that they would want him, that what he considered to be a first-rate organization would want him, satisfied his ego. If I was to take a guess as to whether it was ideological motivation, the romance of communism, or the appeal of ego that was the most important in his career of treason, I’d go with ego. I think that figures into a lot. Someone doesn’t get the promotions that they wanted. Again, if you look at Aldrich Ames in particular, his career in the CIA was hit or miss.

(00:29:08)
He didn’t get the postings or promotions that he wanted. In his evaluations, he never felt that he got credit for what he did. That’s the type of thing that tends to stick in someone’s craw and can provide, for egotistical reasons, an added incentive to betray.
Lex Fridman
(00:29:24)
Yeah, there’s a boost to the ego when you can deceive, sort of not play by the rules of the world and just play with powerful people like they’re your pawns.
Rick Spence
(00:29:36)
You’re the only one that knows this. You’re the only one that knows that the person who is sitting across from you, to whom you have sworn your loyalty, is the one you’re simultaneously betraying. What a rush that must be for some people.
Lex Fridman
(00:29:51)
I wonder how many people are susceptible to this. I would like to believe that a lot of people have the integrity to at least withstand the pull of the money, the ideology, and the ego.
Rick Spence
(00:30:08)
It can also be a combination of these. You can create a recipe out of them: a certain amount of money, ego, and a little push of coercion, that if you don’t, we’ll rat you out. You’ll be exposed.

CIA spies vs KGB spies

Lex Fridman
(00:30:27)
What are some differences to you, as we look at the history of the 20th century, between Russian intelligence and American intelligence, the CIA?
Rick Spence
(00:30:36)
If you look at both the Okhrana and the KGB, one of the things that you find consistent is that a single organization handled foreign intelligence, that is, spying upon enemy or hostile governments, and also internal security. So that’s all part of it. Whereas if you look at the US model that evolved, you eventually have the FBI under Hoover, who insists that he’s going to be the counterintelligence force. If there are commie spies running around America, it’s the FBI who’s supposed to ferret them out. The CIA is not supposed to be involved in that. The charter, the basic agreement in 1947, did not give the CIA any … It’s often said they were barred from spying on Americans, which isn’t quite true. You can always find a way to do that. What they don’t have is any police or judicial powers.

(00:31:34)
They can’t run around in the country carrying guns to use on people. They can’t arrest you, they can’t interrogate you, they can’t jail you. They have no police or judicial powers. Now, that means they have to get that from someone else. Other agencies can be brought in, or local police officials; whatever you need, you can eventually acquire. But they can’t do that directly. So you’ve got this division between foreign intelligence and domestic counterintelligence, often split between hostile organizations. The relationship between the FBI and the CIA, I think it’s fair to say, is not chummy, never has been. There’s always been a certain amount of rivalry and contention between the two. It’s not to say that something like that didn’t exist between the domestic counterintelligence and foreign intelligence components of the KGB, but there would be less of it to a degree, because there was a single organization.

(00:32:42)
They’re all answerable to the same people. So that gives you, I think, a certain greater amount of leeway and power, because you’re controlling both of those ends. I remember somebody telling me once, and he was a retired KGB officer, there you go, retired, that one of the things he found amusing was that in his role, he could be anywhere at any time in any dress, which meant that he could be in or out of uniform in any place at any time. He was authorized to do that.
Lex Fridman
(00:33:26)
So more freedom, more power.
Rick Spence
(00:33:29)
I think one of the common views is that, well, the Russians are simply naturally meaner. There’s less respect for human rights. There’s a greater tendency to abuse whatever power one might have. Frankly, they’re all pretty good at that. It is fair to say that there’s probably some degree of difference, not necessarily for institutional reasons but for cultural ones. There could well be things that Americans might balk at doing more than you would find on the Russian or Soviet side of the equation. The other aspect of that is that Russian history is long and contentious and bloody.

(00:34:22)
One of the things it certainly teaches you is to never trust foreigners. Every foreign government, any country on your border, is a real or potential enemy. They will all, at some point, if given the chance, invade you. Therefore, they must always be treated with great suspicion. It goes back to something I think the British observed: countries don’t have friends, they have interests, and those interests can change over time.
Lex Fridman
(00:34:54)
Well, the CIA is probably equally suspicious of all other nations.
Rick Spence
(00:34:58)
That’s your job. You’re supposed to be suspicious. Your job is not to be trusting. Yeah, the basic job of an intelligence agency is to safeguard your secrets and steal the other guys’, and then hide those away.
Lex Fridman
(00:35:10)
Are there laws that intelligence agencies are not willing to break? Or is it basically a lawless operation where you can break any law as long as it accomplishes the task?
Rick Spence
(00:35:24)
Well, I think John le Carré, to give his pen name, was talking about his early recruitment into British intelligence. And one of the things he remembered being told up front was, “If you do this, you have to be willing to lie and you have to be willing to kill.” Now, those are things that in ordinary human interactions are bad things. Generally, we don’t like it when people lie to us. We expect that people will act honestly towards us, whether that’s a businessman you’re involved with or your employer. We’re often disappointed in that, because people do lie all the time for a variety of reasons, but honesty is generally considered to be a virtue. In a realm where deception is the rule, though, dishonesty is a virtue. To be good at that, to be able to lie convincingly, is good. It’s one of the things you need to do.

(00:36:32)
And killing also is generally frowned upon. We put people in prison for that, or they’re executed. But in certain circumstances, killing is one of those things that you need to be able to do. So what he felt he was being told in that case is that once you enter this realm, the same sort of moral rules that apply in general British society do not apply. And if you’re squeamish about it, you won’t fit in. You have to be able to do those things.

Assassinations and mind control

Lex Fridman
(00:37:03)
I wonder how often those intelligence agencies in the 20th century, and of course the natural question extends to the 21st century, how often they go to assassination, to the kill part of that, versus just the espionage.
Rick Spence
(00:37:21)
Let’s take an example from American intelligence, from the CIA in the 1950s, 1960s, into the 1970s: MKUltra. That is a secret program which was involved with what is generally categorized as mind control, which really means messing with people’s heads. And what was the goal of that? Well, there seem to have been lots of goals. But there was an FBI memo that I recently acquired, quite legally, by the way, it’s declassified, and it’s from 1949. So this is only two years after the CIA came into existence. It’s an FBI memo because the FBI, of course, is very curious what the CIA is up to. The FBI are not part of this meeting, but they have someone who is sort of spying on what’s going on. So there was a meeting which was held in a private apartment in New York. It’s not held in any kind of official venue; it essentially never really happened, because it’s in somebody’s house. And there are a couple of guys there from the CIA. One of them is Cleve Backster. Cleve Backster is the great godfather of the lie detector. Pretty much everything that we know or think we know about lie detectors today, you owe to Cleve Backster. He’s also the same guy that thought that plants could feel, which somehow was a derivative of his work on lie detectors. So these guys are there and they’re giving a talk to some military and other personnel. There are certain parts of the document which are of course redacted, but you can figure out what it is that they’re talking about. They’re talking about hypnotic suggestion and all the wonderful things that you can potentially do with it. And two of the things they note is that one of the things we could potentially do is erase memories from people’s minds and implant false memories. That would be really keen; just imagine how that would be done. So here to me is the interesting point. They’re talking about this in 1949. MKUltra does not come along until really 1953.
Although there are all sorts of precursors, Artichoke and others, everything sort of leads up to that. It’s simply an elaboration of programs that were already there. I don’t think that it ultimately matters whether you can implant memories or erase memories. To me, the important part is they thought they could and they were going to try to do it. And that eventually is what you find in the efforts made during the 1950s and ’60s through MKUltra, MKSearch, MKNaomi and all the others that came out. That’s one of the things they’re working toward. And among the few MKUltra-era documents that survived, there’s that whole question: could you get someone to put a gun to someone’s head and pull the trigger and then not remember it later? Yeah, you could, interestingly enough.
Lex Fridman
(00:40:35)
So non-direct violence, controlling people’s minds, controlling people’s minds at scale and experimenting with different kinds of ways of doing that.
Rick Spence
(00:40:45)
One person put it that the basic thing you’re after was to understand the architecture of the human mind, how it worked, how it was put together, and then how you could take those pieces apart and assemble them in different ways. So this is where hypnosis comes in, which was then, and still is, a fairly spooky thing. Nobody’s ever explained to me exactly what it is. The idea, and think of the possibilities in this case, was: could you create an alternate personality and use that alternate personality in an agent role, but then be able to turn it on and off?

(00:41:29)
So if, subsequently, the person that personality inhabited was captured and interrogated, tortured, had their fingernails torn out, they would have no memory of it. They couldn’t give any kind of secret away, because it was embedded in some part of their brain where there was a completely different person. You can just imagine the possibilities that you can dream up. And again, I think the question is not whether that is possible or whether it was done, although I suspect that both of those are true, but that you would try to do it. Then imagine the mischief that comes out of that. And one of the big complaints from a legal standpoint about MKUltra and the rest is that you were having medical experiments essentially being carried out on people without their knowledge and against their will, which is a no-no.
Lex Fridman
(00:42:25)
Yeah. The fact that you’re willing to do medical experiments says something about what you’re willing to do. And I’m sure that same spirit, innovative spirit, persists to this day. And maybe less so, I hope less so, in the United States, but probably in other intelligence agencies in the world.
Rick Spence
(00:42:50)
Well, one thing was learned, and it’s the reason why most MKUltra and similar records were destroyed on order in the early ’70s, around the time the CIA came under a certain amount of scrutiny. The mid-’70s were not a good time for the agency, because you had the Church Committee breathing down their neck; people were asking lots of questions. So you need to dump this stuff, because you are committing crimes against American citizens, so let’s eradicate it. And the important lesson learned is: never do this type of thing again, at least not in any way in which the agency’s direct fingerprints are placed on it. You can pay people. You can subsidize research. You can set up venture capital firms. You’ve got plenty of money, and you can funnel that money into the hands of people who will carry out this research privately. So if something goes wrong, you have perfect deniability.

Jeffrey Epstein

Lex Fridman
(00:43:57)
On the topic of MICE, of money, ideology, coercion and ego, let me ask you about a conspiracy theory. So there is a conspiracy theory that the CIA is behind Jeffrey Epstein. At a high level, if you can just talk about that, is that something that’s at all even possible? Basically, this would be the coercion part: you get a bunch of powerful people to be sexually mischievous and then you collect evidence on them so that you can then have leverage on them.
Rick Spence
(00:44:31)
Well, let’s look at what Epstein was doing. He was a businessman who then also developed a very lucrative sideline as a high-level procurer, basically, in supplying young girls. And he also filmed much of that activity. His partner in this was Ghislaine, and I hope I’m pronouncing her name correctly.
Lex Fridman
(00:45:03)
I think it’s Ghislaine.
Rick Spence
(00:45:03)
Ghislaine?
Lex Fridman
(00:45:05)
Yeah.
Rick Spence
(00:45:05)
Well, I’ve heard it both ways, Ghislaine or Ghislaine, whichever it may be. I think her argument at one point was that, “Well, we did this to protect ourselves.” But this type of thing has been done before; there’s nothing new about getting influential people in compromising situations and filming them. I could give you another historical example of that. In the late 1920s, actually the early 1930s, in just-pre-Nazi Berlin, there was a very prominent sort of would-be psychic and occultist by the name of Erik Jan Hanussen. He had a private yacht, I think it was called the Seven Sins. And he hosted parties. He also had a whole club called the Palace of the Occult, which hosted parties where things went on. And there were cameras everywhere. He filmed important people, guys like the brownshirt chief of Berlin, in various states of undress and sexual congress. And he did that for the purposes of blackmail.

(00:46:11)
So in Epstein’s case, he is a procurer of young girls, largely to wealthy men. And many of those events were recorded. Now, even if it wasn’t his intention to use them for blackmail, think of what someone else could do with them, because people know about this. So you could ask whether Epstein was just kind of a greedy pervert, but through his greedy perversion, he’s now collecting information that could be useful. Who could that be useful to? Who would like dirt on Prince Andrew? Think of all the people who were there, and there were important people who went to Lolita Island. So if it isn’t Epstein directly, and I’m not trying to let him off the hook, he was either running his own blackmail business or someone was using him as a front for that. I think we’re kidding ourselves if we’re trying to pretend that’s not what was going on.
Lex Fridman
(00:47:24)
So you think European and American intelligence agencies would be willing to swoop in and take advantage of a situation like that?
Rick Spence
(00:47:33)
Well, you know-
Lex Fridman
(00:47:36)
Just in the case.
Rick Spence
(00:47:36)
American politicians could ultimately end up in a position to oversee things like intelligence budgets. One of them might even become director. You never know. You can never tell what some crazy president might do. One of the guys who understood this was J. Edgar Hoover. J. Edgar Hoover spent a long time collecting dossiers on politicians. How do you think he remained director of the FBI as long as he did? Because he systematically collected dirt on people. So there is a history of this type of thing. And again, you could argue that was partly for his protection, to keep his job, to protect the sanctity and security of the Bureau. You can find a million different ways to justify that.
Lex Fridman
(00:48:28)
That’s really dark.
Rick Spence
(00:48:31)
Well, there is that side to human nature, let’s put it that way.
Lex Fridman
(00:48:37)
Whether it’s the CIA or the Okhrana, maybe that’s what the President of the United States sees when they show up to office: all this stuff they have on him or her, and a message that there’s an internal mechanism of power that you don’t want to mess with, and so you will listen, whether that internal mechanism of power is the military-industrial complex or the bureaucracy of government.
Rick Spence
(00:49:02)
Contacts with the deep state.
Lex Fridman
(00:49:04)
The deep state.
Rick Spence
(00:49:05)
Entrenched, bureaucratic. Well, it’s been said, and I think it’s generally true, that bureaucratic creatures are like any other creatures. They basically exist to perpetuate themselves and to grow.
Lex Fridman
(00:49:16)
Yeah.
Rick Spence
(00:49:17)
Nobody wants to go out of business. And of course, you get all of these things like Pizzagate and accusations of one form or another. But here’s an interesting thing to consider. Okay. And I want to be clear that I’m not saying Pizzagate or QAnon or any of that was in any way real, but where do they get these ideas from? So let’s ask ourselves, do pedophiles exist? Yeah. Do organized pedophile organizations exist? Yeah, they share information, pictures, they’re out there on the dark web, they cooperate. Does child trafficking exist? Yeah, it does. So in other words, whether or not specific conspiracy theories about this or that group of organized pedophile cultists are real, all the ingredients for them to be real are there. Pedophiles exist, organized pedophilia exists, child and human trafficking exists. At some point, at some time, someone will put all of those together. In fact, certainly, they already have.
Lex Fridman
(00:50:42)
We’ll jump around a little bit.

Bohemian Grove

Rick Spence
(00:50:43)
Yeah.
Lex Fridman
(00:50:43)
But your work is so fascinating and it covers so many topics. So if we jump into the present with the Bohemian Grove and the Bilderberg group.
Rick Spence
(00:50:54)
Bilderbergers.
Lex Fridman
(00:50:56)
So the elites, as I think you’ve referred to them. So these gathering of the elites, can you just talk about them? What is this?
Rick Spence
(00:51:06)
Well, the first thing I have to point out is that Bohemian Grove is a place, not an organization; it’s where the Bohemian Club meets. It’s that 2,700 acres of old-growth redwoods north of San Francisco. The Bohemian Club began, I think, back in the 1870s. Its initial members were mostly journalists. In fact, supposedly the name itself comes from a term for an itinerant journalist who moved from paper to paper: a bohemian. Although I think there may be other reasons why that particular term was chosen as well. Among the original five members, I think there were three journalists, a merchant, and a vintner, a guy who owned vineyards in California. How surprising. None of them terribly wealthy, but they formed an exclusive men’s club, which it was and still is. And nothing terribly unusual about that at the time. But it became fashionable. And as it became fashionable, more wealthy people wanted to become part of it. And the thing about getting rich guys to join your club is, what do rich guys have? Money. And of course, it was one of those rich guys that bought Bohemian Grove, where you now build your old boys’ summer camp, which is what it is. They’ve got cabins with goofy names. They go there, they perform skits, they dress up in costumes.
Lex Fridman
(00:52:36)
Yeah.
Rick Spence
(00:52:37)
True. Some of those skits look like pagan human sacrifices, but it’s just a skit. What’s really going on there? So on the one hand you can argue, look, it’s a rich guys’ club. They like to get out there. The whole motto of the place is “weaving spiders come not here.” So we’re not going to talk about business. We just want to get out into the woods, put on some robes, burn a couple of effigies in front of the owl, have a good time, probably get drunk a lot.
Lex Fridman
(00:53:06)
What’s with the robes? Why do they do weird creepy shit? Why do they put on a mask and the robe and do the plays and the owl and then sacrificing, I don’t know, whatever?
Rick Spence
(00:53:19)
Why do you have a giant owl?
Lex Fridman
(00:53:21)
Exactly.
Rick Spence
(00:53:22)
Why do you do that?
Lex Fridman
(00:53:23)
What is that in human nature? Because I don’t think rich people are different from non-rich people. What is it about wealth and power that brings that out of people?
Rick Spence
(00:53:33)
Well, part of it is the ritual aspect of it. And yeah, that clearly is a ritual. Rituals are pretty simple. Rituals are just a series of actions performed in a precise sequence to produce an effect. That describes a lot of things. It describes plays, symphonies, every movie you’ve ever seen. A movie is a ritual. It is a series of actions carried out in a precise sequence to produce an effect with an added soundtrack to cue you to what emotions you’re supposed to be feeling.
Lex Fridman
(00:54:06)
It’s a great idea. So the rich people should just go to a movie or maybe just go to a Taylor Swift concert. Why do you have to, why the owl thing?
Rick Spence
(00:54:16)
Part of it is to create this kind of sense, I suppose, of group solidarity, and also it’s a way of transcending yourself. When you put on the robe, it’s like putting on a uniform. You are in some way a different or more important person. It’s a ritual. Okay. The key ritual at Bohemian Grove is a thing called the Cremation of Care. And that’s what it’s supposed to be: “We’re rich, important people. We have to make all of these critical decisions. Life is so hard. So we’re going to go out here in the woods and we’re going to kick back, and we’re all going to gather around the lake, and then we’re going to carry,” well, it’s wicker, it’s not a real person, and how would you know? “And this is the cremation of our care.” It’s a ritual which is meant to produce a sense of solidarity and relief among those people who are there.

(00:55:18)
The question comes down with the rituals as how seriously do you take them? How important is this to the people who carry them out? And the interesting answer to that is that for some people it’s just boring. There are probably people standing around the owl who think this is ridiculous and can’t wait for it to get over with. There are the people that are kind of excited about it, get caught up into it, but other people can take it very seriously. It’s all the matter of the intention that you have about what the ritual means. And I don’t mean to suggest by that that there’s anything necessarily sinister about what’s going on, but it is clearly a ritual carried out for some kind of group reinforcing purpose. And you’re absolutely right. You don’t have to do it that way. I’ve gone to summer camps and we never carried out mock sacrifices in front of an owl. We did all those other things. We didn’t even have any robes either. So it goes beyond merely a rich guy summer camp, although that’s an aspect of it.

(00:56:29)
But focusing on Bohemian Grove, the getaway of the club, I think often obscures that the club is around all the time. That’s what’s at the center of this: it is the club and its members. So despite all the talk about no weaving spiders coming around here, one of the other features of the summer meeting are things called lakeside talks. And often people are invited to go there. And one of the people who was invited, I think around 1968, was Richard Nixon, who was making his political comeback. And he was invited to give a talk where very important people are listening. And Nixon, in his memoirs, recalled realizing what was going on. He was being auditioned as to whether or not he was going to be [inaudible 00:57:19]. He recognized that that was really the beginning of his second presidential campaign. He was being vetted.

(00:57:27)
So one of the main theories, call it a conspiracy theory or not, about the Bohemian Club and the gatherings, is that people of wealth and influence gather together and whether or not it’s part of the agenda or not, inevitably you’re going to talk about things of interest. But to me, the mere fact that you invite people in, political leaders, to give lakeside talks means that there are weaving spiders which are going on and it is a perfect private venue to vet people for political office.
Lex Fridman
(00:58:04)
Yeah, where else are you going to do it, if you are interested in vetting, if you are powerful people doing the selecting?
Rick Spence
(00:58:10)
Well see, here’s the question. Are these guys actually picking who’s going to be president? Is that the decision which is being made or are they just deciding what horses they’re going to back?
Lex Fridman
(00:58:21)
Right.
Rick Spence
(00:58:22)
I think the latter is the simpler version of it, but it doesn’t mean it’s the other way around. But these are the kinds of, Nixon was, there was the whole 1960 thing. So he’s the new Nixon, remember, and this is where the new Nixon apparently made a good impression on the right people because he did indeed get the Republican nomination and he did indeed become president.
Lex Fridman
(00:58:49)
Well, there could also be a much more innocent explanation of really it’s powerful people getting together and having conversations and through that conversation, influencing each other’s view of the world and just having a legitimate discussion of policies, foreign policy.
Rick Spence
(00:59:06)
Why wouldn’t they? Why would you assume that people are not going to do that?
Lex Fridman
(00:59:10)
It’s the owl thing with the robes.
Rick Spence
(00:59:13)
Why the owl and why the robes?
Lex Fridman
(00:59:17)
Which is why it becomes really compelling when guys like Alex Jones, forgive me, but I have not watched his documentary, I probably should at some point, about the Bohemian Grove where he claims that there is a Satanist human sacrifice of, I think, children. And I think that’s quite a popular conspiracy theory. Or has lost popularity, it kind of transformed itself into the QAnon set of conspiracy theories. But can you speak to that conspiracy?
Rick Spence
(00:59:54)
Let’s put it this way: to the general public, rich people are inherently suspicious.
Lex Fridman
(00:59:57)
Yeah. Great.
Rick Spence
(00:59:58)
Let’s put it that way. First of all, they’ve got all that money. And exactly how did one obtain it? And I do not of necessity adhere to the view that behind every great fortune there is a great crime, but there often is. There are ways in which it’s acquired. But one of the things I think can happen, particularly when people acquire a huge amount of money, and I won’t name any names, but let’s say there are people, perhaps in the tech sphere, who, coming from no particular background of wealth, suddenly find themselves with $600 billion. Whoa. This is the question you would have to ask yourself: why me? Because you’re one of the rare, tiny group of human beings who will ever have that kind of wealth in your hands. Even if you are a convinced atheist, I think at some point, you have to begin to suspect that the cosmic muffin, providence, whatever it is, put this money in your hands to do what? Achieve great things. Just think of all the stuff.

(01:01:08)
So you’re going to start a foundation and you’re going to start backing all the things that you like. I think there’s an element of ego that comes in with it as well. And again, it may not be so much what the rich person wants; a rich person with a huge amount of money at their disposal and a lot of fuzzy ideas about what to do with it can be influenced by others. It’s always that question as to who is actually manipulating these events. What’s going on in that regard? In some way, they can be a very useful sucker. Find somebody with a lot of money and get them to finance the things that you want them to do.

(01:01:59)
The Bohemian Club is not, I don’t think, in and of itself inherently evil or sinister, but it means that there are lots of different people in it who have different agendas. It goes back to what I said about how somebody feels about the cremation of care ritual. This is either just a waste of time, just some sort of silly thing that we’re doing, or it’s something of great importance, perhaps even mystical or religious importance. Because that’s ostensibly what it’s pretending to be. There’s always this question as to what degree you begin to play and the play becomes serious. That tends to happen a lot.

Occultism

Lex Fridman
(01:02:43)
You’ve studied a lot of cults and occultism, what do you think is the power of that mystical experience?
Rick Spence
(01:02:52)
Well, what is broadly referred to… Well, we get into what’s occultism, what’s the occult? The occult is the hidden, that’s all it really means. Specifically, hidden from sight. And the basis of it is the idea that what is hidden from us is most of the world, most of reality. So the basic concept within occultism, the basic concept within most religions, which are approved forms of occultism, is that the world, the physical world that we are aware of, is only a very small part of a much larger reality. And what the methods and practices of occultism arguably do is to allow someone to either enter into this larger reality or to access that larger reality for purposes to be exploited here. The most interesting statement about this, and a key element of it, becomes the thing called magic.

(01:03:58)
Now, we all know magic, it’s a guy standing on stage performing a trick. But the interesting thing about a stage magician is that we know when we’re watching that it’s a trick, yet we can’t really figure out, if he does it well, how that trick is being accomplished, because it seems to defy physical laws. And that’s fascinating about it. So even though it’s a trick, if you can’t figure it out, it has this kind of power of fascination. But it’s mimicking something. Stage magic is mimicking real magic. So what’s real magic? Well, let’s go back to Aleister Crowley, because he always has to come up. I knew he was going to come up at some point in this, earlier than not, because he always does.
Lex Fridman
(01:04:51)
All roads lead to Aleister.
Rick Spence
(01:04:52)
All roads lead to Aleister Crowley. Aleister Crowley, and I’ve said this enough that I should be able to get it right, but I’m paraphrasing here, he goes, “Magick,” which of course he spelled with a CK, “is the art and science of causing change to occur in conformity with will.” So in a way, that’s sort of mind over matter. But it’s the idea that one can, through will, through intention, bend reality to make something happen. Somebody once put it this way: it’s tipping the luck plane. So you’ve got some kind of a level plane. What we’re trying to do is just tip it a little bit so the marble rolls over to one side or another. Now that presupposes a lot of things, like, is there a luck plane? I don’t know. But it’s a good sort of idea to have. And here again, don’t become overly bothered trying to figure out whether you actually can bend reality; become bothered by the fact that there are people who believe that they can and will go to great efforts to do so and will often believe they have succeeded.

(01:06:19)
So it’s this effort to make things occur in a particular way, maybe just to sort of nudge reality in one little way or another. And that’s where things like rituals come in. Rituals are a way of focusing will and intention. We’re all there. We’re all thinking about the same thing. And you have to imagine just how pervasive what could be called that kind of magical thinking is, every day, everywhere. So let me give you an example. You ever attended a high school football pep rally? Think of what’s going on there. Okay, your team is going to battle the other team. You’ve now assembled everyone in the gymnasium. You’ve got people who are dancing around in animal totem costumes. And what are you chanting? Everyone is supposed to chant that the other team dies, that they’ll be horribly defeated and that our team will be victorious.

(01:07:21)
That is a magic ritual. It ties into this idea that’s very popular today about visualizing things, manifesting. I love this term. You need to manifest your success. Well, that’s just magic. That is trying to cause change in conformity with will. So these things can happen without you being even consciously aware of what’s going on. And you don’t need to be, because if you’re part of a mob, which is there in the gymnasium, and you get into this and you get worked up, an occultist would argue that what you’re doing is creating a huge amount of energy. All of these people are putting energy into something and that energy goes somewhere. And maybe you can. Maybe, just maybe, you actually can slightly increase the chances of your team’s victory. Of course, your opponents are having their own ritual at the same time. So whoever has the bigger mojo will apparently win.
Lex Fridman
(01:08:30)
So I would say trivial example of that, but a clear one. I do believe that there’s incredible power in groups of humans getting together and morphing reality. I think that’s probably one of the things that made human civilization what it is. Groups of people being able to believe a thing and bring that belief into reality.
Rick Spence
(01:08:54)
Yes, you’re exactly right. To conceive of something and then, through intention, through will, to manifest that into this realm.
Lex Fridman
(01:09:07)
And of course, that power of the collective mind can be leveraged by charismatic leaders to do all kinds of stuff, where you get cults that do horrible things or anything.
Rick Spence
(01:09:24)
There might be a cult that does good things. I don’t know. It depends.
Lex Fridman
(01:09:27)
We usually don’t call those cults.
Rick Spence
(01:09:27)
We don’t call those cults.
Lex Fridman
(01:09:29)
Exactly. A hundred percent.
Rick Spence
(01:09:31)
Without endorsing this entirely, and it’s interesting, one of the questions: what’s the difference between a cult and a religion? And it has been said that in the case of a cult, there’s always someone at the top who knows what’s going on, generally, who knows it’s a scam. In a religion, that person is dead. So see, I’ve just managed to insult every single religion. But it’s an…
Rick Spence
(01:10:00)
… Insult every single… But, it’s an interesting way of thinking about it, because I think there is some degree of accuracy in that statement.
Lex Fridman
(01:10:11)
Actually, the interesting psychological question is, in cults, do you think the person at the top always knows that it’s a scam? Do you think there’s something about the human mind where you gradually begin to believe it?
Rick Spence
(01:10:24)
Begin to believe your own bullshit?
Lex Fridman
(01:10:25)
Yeah.
Rick Spence
(01:10:26)
Yes.
Lex Fridman
(01:10:27)
That seems to be-
Rick Spence
(01:10:28)
That, again, is part of magic, I think, believing your own bullshit. It doesn’t necessarily mean that the head of the cult realizes it, but there’s someone, maybe the second… I always look at the lieutenant; someone probably has an idea about what’s going on. The other thing that seems to be a dead giveaway for what we would call a cult is what’s called excessive reverence for the leader. People just believe everything these people say. To give you an example, the first time I ever encountered anything like that was in Santa Barbara, California in the 1970s. I was going to grad school. And there was a particular cult locally, I think it was the Brotherhood of the Sun. And it was the same. There was some guy who… Among other things, followers were convinced to hand over all their money and personal belongings to him. I believe he used part of that money to buy a yacht. Anyway. A lot of it went to him.

(01:11:40)
And then, of course, working for free in various cult-owned business enterprises, of which there were several. And there was a person I knew who became a devoted follower of this, and all I could think at one point was to ask them, “What the hell is the matter with you? I mean, have you lost your mind? What is it that this person can possibly be providing that you essentially are going to become a slave to them?” Which is what they were doing. And I actually give that credit, in a way, for sparking my whole interest in things like secret societies. And here, again, as a disclaimer, I am not now, nor have I ever been, a member of any fraternal organization, secret society, or cult that I know of. And that’s what interests me about them, because I’m just always trying to figure out why people do these things. Like I said, why the robes and the owl? Why?
Lex Fridman
(01:12:43)
… Yeah.
Rick Spence
(01:12:44)
Why do you do that? And, it’s trying to figure it out. I mean, I couldn’t even hack the Boy Scouts. Okay? That was too much. Because to me, you join an organization and the first thing that comes along is there are rules and someone is telling you what to do. Okay? I don’t like people telling me what to do. I’ve spent much of my life trying to avoid that as much as possible. And, join a cult, there’s going to be someone telling you what to do. Join the Bohemian Club, and there’s going to be someone telling you what to do. Obviously, a lot of people really get something out of that. In some ways, it’s necessary for them to function. But I do not understand it, and my study of it is a personal effort to try to understand why people do that.
Lex Fridman
(01:13:33)
And there are so many reasons, primary of which I would say is the desire in the human heart to belong. And, the dark forms that takes throughout human history. Recent history is something I’d love to talk to you a bit about. If we can go back to the beginning of the 20th century on the German side, you’ve described how secret societies like The Thule Society lay the foundation for Nazi ideology. Can you, through that lens, from that perspective, describe the rise of the Nazi party?

Nazi party and Thule society

Rick Spence
(01:14:10)
Well, I guess we could start with what on earth is The Thule Society? So The Thule Society was a small German occult society. That is, they studied metaphysics, another fancy word for occultism, that appeared in Munich around 1917, 1918. The key figure behind it was a German esotericist by the name of Rudolf von Sebottendorff. Okay, not his real name. His real name was Adam Rudolf Glauer. He was adopted by a German nobleman and got the name von Sebottendorff, and I like to say that name.

(01:15:02)
So, I have this real thing about vague, mysterious characters who show up and do things, and trying to figure out who these people are. So we’re working up the years prior to the first World War. In the decade or so prior to World War I, he spends a lot of time in the Ottoman Empire, Turkey. The Ottoman Empire was a fairly tumultuous place, because in 1908 and 1909, there was the Young Turk Revolution. You had a military coup, which effectively overthrew the Ottoman Sultan and installed a military junta, which would go on during the first World War to make its greatest achievement in the Armenian Genocide. Eventually, it created a genocidal military regime which would lead the country into a disastrous first world war, which would destroy the Ottoman Empire, out of which modern Turkey emerges. Yada, yada, yada.
Lex Fridman
(01:16:06)
And by the way, we should take a tiny tangent here, which is that you referred to the intelligence agencies as being exceptionally successful. And here, in the case of the Young Turks, they were also very successful in carrying out the genocide, meaning they achieved the greatest impact, even though that impact, on the scale of good to evil, tends towards evil.
Rick Spence
(01:16:33)
It’s one of those things that often comes out of revolutionary situations. Revolutions always seek to make things better. Don’t they? “We’re going to take a bad old regime. The Sultan is…” And the Sultan was bad, I think it’s fair to say. Abdul Hamid II wasn’t called the Red Sultan because of his favorite color, that type of thing. And the idea is that they were going to improve things. The Ottoman Empire was a multinational empire. They were going to try to equalize and bring in the different groups. And none of that happened. It became worse, in the same way that you could argue that the goal of Russian revolutionaries was to get rid of the bad old, incompetent, medieval Tsarist regime and to bring in a new great shining future. And it became even more authoritarian. And the crimes of the Imperial Russian regime pale in significance compared to what would follow, in the same way that the crimes of Abdul Hamid pale when you get to the Young Turks.

(01:17:44)
But that wasn’t necessarily the intention. But von Sebottendorff is a German businessman who’s working in this period. And the whole point here is that the Ottoman Empire in this period is a hotbed of political intrigue and all kinds of interesting things. The Young Turk Revolution is essentially a military coup, but it is plotted in Masonic lodges. Okay? I know, technically Masonic lodges are never supposed to be involved in politics, but they are. Or, the lodge meeting breaks up, and then you plot the revolution. So, same group of people, but it’s not technically the lodge. But yes. And the Macedonia Risorta Lodge in Thessaloniki was ground zero for plotting this military coup that was supposed to improve the Empire. Sebottendorff is, in one way or another, mixed up in all of this, or at least he’s an observer. Plus, he’s initiated into the Masonic lodges.

(01:18:53)
And interestingly enough, the fellow who initiates him into one of these eastern lodges is a Jewish merchant by the name of Termudi, who’s also a Kabbalist. And Sebottendorff is very, very interested in the occult. He’s initiated into eastern Masonic lodges in a period when those same lodges are being used as a center for political intrigue. He also apparently is involved in gunrunning, which in revolutionary periods, there’s a lot of money to be made off of. So he’s connected to various dark businesses in a tumultuous time, with connections to politicized freemasonry and the occult. Now, in the course of the first World War, he returns to Germany. He just shows up. And it would be my operative suspicion or theory that Sebottendorff was working for someone. I don’t think he just pops up in Munich of his own accord. Why does he leave the Ottoman Empire and return to that place? Who’s behind him? Now, maybe no one, but maybe someone, because he does seem to have money at his disposal. And he comes into Munich and he basically takes over this small occult study group.

(01:20:32)
Now, the interesting thing is that The Thule Society is really just a branch of another existing order, what’s called an Ariosophist order, a thing called the German Order, or the Germanic Order, which is centered in Berlin. But for some reason, he doesn’t want his group to be connected by name with the Germanic Order. So, Thule Society. Thule in this case is a reference to a supposedly mythical Arctic homeland of the Aryan race. Apparently, they were all snow people who wandered out of the snow at some point. It’s a frozen Atlantis. So I mentioned these people, the Ariosophists, which, you have to practice saying that. So, what are they? Well, they’re a racist Germanic offshoot of Theosophy. And, I know I’m explaining one thing to explain something else, but there’s no other way to do this.

(01:21:39)
So, Theosophy was a very popular and widely imitated 19th century occult belief that was founded by a Russian woman by the name of Helena Blavatsky. She was a medium, a psychic, who supposedly got channelings from the ascended masters. The basic story there: there are the ascended masters, which are mystical beings that may or may not have once been human. They live inside the Himalayas or they float among them on a cloud, and they guide the spiritual evolution of humanity. What Blavatsky did was to take Western esotericism and blend it with Hindu and Buddhist esotericism, which became very, very sexy in the West, still is. Buddhism attracts a lot of people, because, well, it’s Buddhism, it’s different, see? So, the Mahatmas, the ascended masters, were sending her messages, despite the fact that she was later proven pretty much to be a fraud, writing the letters herself. Nevertheless, people still went along with this doctrine, and it’s been widely modified and copied since then. So, an idea in Theosophy was that human spiritual evolution was tied to physical evolution.

(01:22:58)
In the case of Blavatsky, Blavatsky never said that Aryans, white people, anything of that sort, were superior. She talked about the different root races, but her version of it is just gobbledygook that seems to include everyone. I’d defy you to make much sense out of it. But, in the early 20th century, there were different… One of the things that became fashionable, not terribly popular, these are small movements, was the idea that, well, Germany is a new upcoming country, and part of this I think was really trying to define who the Germans were. Because remember, the German Empire, Germany as a political state, doesn’t come into existence until 1871. Prior to that, Germany was a geographic expression, a vague one, which described a large area in Central Europe where a lot of people who wore leather shorts or something like that and spoke similar German dialects were nominally Germans, but they might be Prussians or Bavarians. They came in all sorts of varieties in religion. There was no German identity.

(01:24:19)
Something very similar happened in Italy in this same period. I mean, there weren’t Italians; there were Sardinians, and there were Romans, and there were Sicilians and Umbrians, who spoke, again, dialects of a similar language, but had never lived, not since the Roman Empire, under a single state, and really didn’t think of themselves as the same. So you have to create this artificial thing. You have to create Germans. “There is now a Germany with an emperor. And so, we’re all going to be Germans.” Well, exactly what is that? Much of it is an artificial creation. You have to decide upon some standard dialect. Okay, we’ll decide what that is. Often a dialect that only a few people actually speak, and then it will be drilled into children’s heads through state schooling programs. So I think this is the milieu that it comes out of. People were trying to figure out what on earth Germans actually were, and the need for some common identity. And that leads to everything like Wagnerian opera. Richard Wagner wanted to create a German mythical music. So he went back and strip-mined old German myths and cobbled them together into a lot of people standing on stage singing. And that was his purpose. He was a nationalist. He was in many ways a racialist nationalist. And this was his idea of trying to create, out of bits and pieces of the past, a newfangled form of German identity.

(01:25:57)
So, on the more mystical end of this, you had the ideas that, well, Germany must have been created for some special purpose, because the Germans must be very special people and we must have some particular destiny. And then, out of this, the direction this is heading: well, we’re all part of some master race with ties to some great civilization in the past, call it Thule, call it whatever you want it to be. They basically just invent things and try to attach those to the past. And so, Ariosophy was the Aryanized version of Theosophy. And what this did was to take the idea that spiritual and physical evolution had led to the most advanced form of human beings, which were the Aryans, and the most advanced group of them were, of course, the Germans. And this had its appeal.

(01:26:56)
Keep in mind, again, this was not a mass movement. This was very much a fringe movement. Most people weren’t aware of it and weren’t particularly interested in it, but it had an appeal for those who already had an esoteric bent in some form or another. And this is where things like the Germanenorden, or the German Order, and other groups, it was only one of many, grew out of. And what the Thule Society, The Thule Gesellschaft, as a branch was supposed to do was to study this. It was an esoteric study group. And so, people would get together and they’d talk about things, probably make more stuff up, and all work around this idea of German Aryans as the most advanced human beings, and all the wonderful things that the future would hold.

(01:27:52)
And the fact that this was in the midst of a war in which Germany was, again, fighting, as they saw it, for its existence, heightened those tensions as well. So, my suspicion, again, is that Sebottendorff, in terms of who was behind him, that he was essentially called back to Germany to work either for the Prussian political police or for some aspect of German intelligence or security to try to mobilize occultism or esotericism for the war effort, because again, this is 1918, the war, it’s gone on way too long. Within a few months, Germany will collapse, and it will collapse simply from the psychological exhaustion of the population.
Lex Fridman
(01:28:48)
So this is almost to help the war effort with propaganda, a narrative that can strengthen the will of the German people.
Rick Spence
(01:28:58)
Well, strengthen the will of some people.
Lex Fridman
(01:28:58)
Some people.
Rick Spence
(01:28:59)
You have to try to appeal to different aspects of this. But the mystical aspect is one of those things; it can have a very powerful influence. And the idea is that if we can come up with some mystical nationalism, maybe that’s one way to put it, a mystical nationalism that can be exploited for the… Because at this point, you’re grasping at straws, and this is a whole period when the Germans are marshalling the last of their forces to launch a series of offensives on the Western front, the Peace Offensive, which will initially be successful, but will ultimately fail and lead to a collapse in morale. But among the leadership of Germany, there was a recognition that national morale was flagging. And one of the other things that was raising its head was what had happened nearby a year… Well, the Russian Revolution, which had now brought another solution to all of this, the idea of revolutionary Marxism. Here, we need to remind ourselves as to where Marxism comes from: not Russia, Germany. Where was the largest Marxist party? In Germany.
Lex Fridman
(01:30:17)
And Marx probably expected the revolution to begin in Germany.
Rick Spence
(01:30:23)
Where else?
Lex Fridman
(01:30:24)
I mean, the Soviet Union is not very industrialized. Germany is. And so, that’s where it would probably be.
Rick Spence
(01:30:29)
Russia, 5% of the population is industrial workers. In Germany, 40% of the population is industrial. So, if any place was made for Marxism, it was Germany. I think that’s why it caught on in East Germany so well, because it had come home. And it was a local belief. It wasn’t something imported by the Russians. It was a German invention. One of the things you can see in this is that The Thule Society was particularly involved in anti-Marxist or anti-Bolshevik agitation. Sebottendorff saw this whole movement as a counter to that. It was a counter-Marxist movement.
Lex Fridman
(01:31:19)
Can we try to break that apart in a nuanced way? So, it was a nationalist movement. The occult was part of the picture, occult racial theories. So, there’s a racial component, like the Aryan race, so it’s not just the nation of Germany. And you take that and contrast it with Marxism. Did they also formulate that in racial terms? Do they formulate that in national versus global terms? How do they see this?
Rick Spence
(01:31:51)
Marxism formulates everything by class. Okay? People are categorized by class. You’re either part of the proletariat or you’re part of the bourgeoisie; you’re either part of the proletariat or you’re just some scum that needs to be swept into the dustbin of history. Only workers count. And that would drive someone who was a nationalist crazy, because their idea is, “We’re trying to create a German people. We’re trying to create a common German identity.” But what the Marxists are doing is dividing Germans against each other by class. German workers hate the German bourgeoisie. The German proletariat as opposed to German capitalists. We’re all trying to fight this war together.

(01:32:38)
So, that was why Marxism, particularly in the form of Bolshevism, was seen as unpatriotic. And of course, it was opposed to the war as a whole; the idea, parroting Lenin, was that the war was an imperialist war. And the only good thing that was going to come out of it is that the imperialist war, through all of the crises it was creating, would eventually lead to a class war. And that would be good, because that would reconcile all of these things. But think of the two very different versions of this. The Bolshevist version, or let’s just call it the Marxist version, of Germany was going to be a class society in which we’re going to have to have some civil upheaval, which will have Germans fighting Germans.

(01:33:27)
Whereas the mystical nationalism, the almost religious nationalism that Sebottendorff and The Thule Society had hitched their wagon to, held that Germans are all part of a single racial family, and that’s what must be the most important thing. And these can be different ways of trying to influence people. It comes down to a matter of political influence. So in a sense, I think that what Sebottendorff and The Thule Society were trying to do, at least within Munich, was to use this idea of mystical nationalism as a potential rallying point for some part of the population, to oppose these other forces, to keep people fighting. The war is lost though in November, the Kaiser abdicates, and essentially, the socialists do take over Germany. Things come very, very close to following the Russian model. And you even get a version or take on the Bolsheviks, the Spartacists, who try and fail to seize power early on. But you do essentially end up with a socialist Germany.

(01:34:49)
And that then leaves, in the aftermath of the war, The Thule Society as sort of the odd man out, although they're still very closely connected to the army. And here's one of the things that I find interesting. When you get into 1919, who is it that's paying Sebottendorff's bills? It's the army. The one thing the German army is absolutely determined to do is to preserve its social position and power, and they're perfectly willing to dump the Kaiser to do that. This deal is made in November of 1918: the Kaiser's abdication, the proclamation of a German Republic, which, you just had this guy declare it, it wasn't really planned. There's the Ebert-Groener Pact. Groener is the chief of the general staff at this point; Ebert is the chief socialist politician, basically, and they make an agreement. And the agreement basically is that the Army will support Ebert's government if Ebert supports the Army, and particularly that means the continuation of the Officer Corps and the general staff in one form or another. So a deal is made. And that, of course, is what will eventually help defeat the Spartacist uprising.
Lex Fridman
(01:36:21)
Now, was the Army doing things similar to what we've talked about with the intelligence agencies, this same attempt to control the direction of public power?
Rick Spence
(01:36:32)
The German intelligence landscape in the first World War is obscure in many ways. There are lots of things going on. Germany has a military intelligence service called Abteilung IIIb, Section IIIb. That's just plain military intelligence. They're constantly trying to collect military information, before the war about the weaponry and plans of the enemies, and then, during the war, about what the operational plans were. It doesn't really go much beyond that, though. The German foreign office runs a political intelligence service, and that's the one which is much more involved in things like subsidizing subversion in Russia, which is one of the things that the Germans sign on to fairly early. Little diversion here: in 1915, there is a Russian revolutionary who's lived much of his life in Germany, who goes by the code name of Parvus. And he essentially comes to the Germans in Constantinople, interestingly enough, in Turkey; he's hanging around there at the same time as Sebottendorff is there, which I find curious.

(01:37:55)
So, Parvus, or Alexander Helphand to give his actual name, comes to them and he goes, "Look, there's a lot of revolutionaries in Russia and there's a lot of mistrust of the regime. We think that the war will increase the contradictions in Russian society. And, if you give me a lot of marks, I can finance this revolutionary activity. And through subversion, I can take Russia out of the war." Well, the Germans are facing a two-front war. That sounds great. "We'll use money in order to…" But notice what they're doing. The German general staff, a very conservative organization, not a bunch of revolutionaries, are going to finance revolution in an opposing country. They're going to finance revolutionary subversion to take Russia out of the war, which basically works. So that gives you another idea as to what the German military is willing to do. They're not revolutionaries, but they'll pay revolutionaries to subvert another regime. Now you've got the problem that the revolutionary regime your money helped bring to power is threatening to extend into your own country.

(01:39:19)
So, the whole question for the Army and for others in Germany in 1919 is how to keep Germany from going Bolshevik, from, in a sense, being hoist with its own petard. So The Thule Society, I don't think, is a huge part of this program, but it is a part of it, and it's all an effort to try to keep control. And that's why the army is financing them. That's even why the Army at some point supplies them with its own propagandists. So, The Thule Society begins to create, under Sebottendorff's leadership, what he called the Rings of Thule. And these are satellite organizations that aren't the Society as such, but they're controlled and inspired by it. And one of those is a thing called the German Workers Party.

(01:40:14)
And the German Workers Party, again, is local. It's not large, it's not terribly influential, but what does it aspire to be? It aspires to be a party that will bring German workers away from the seductive influence of the Bolsheviks and into a more patriotic position. And the way that I describe this is that it's not an anti-communist organization, it's a counter-communist organization. So you don't create something which completely opposes it, you create something which mimics it, which is ultimately what the German Workers Party will become: the National Socialist German Workers Party. Note that term, socialist. And that is, in my view, what Nazism is from the beginning. It is a counter-communist movement.
Lex Fridman
(01:41:13)
And by the way, for people who don’t know, the National Socialist German Workers Party is also known as the Nazi Party. So how did this evolution happen from that complicated little interplay? We should also say that a guy named Adolf Hitler is in the army at this time.
Rick Spence
(01:41:33)
Yes.
Lex Fridman
(01:41:34)
Man.
Rick Spence
(01:41:35)
Well, he's going to come into this, because remember, I said the Army was going to supply its own propagandists to help the German Workers Party and The Thule Society do their work. And the propagandist they supply them with is a man whom the Army trains, sends to classes to learn the art of public speaking and propaganda. And that fellow is Corporal Adolf Hitler.
Lex Fridman
(01:42:01)
So how does Adolf Hitler connect with the German Workers Party?
Rick Spence
(01:42:06)
Well, he'd been in the Army during the war. It was the only regular job that he'd ever had, and he liked it. So you often get the view that, well, at the end of the war, he joined millions of other German soldiers who didn't have… No, no, he stays in the army. He stays in the Army until 1921. He's on the Army payroll at the very time in which he is helping them to set this up. What appears to have happened is this: Sebottendorff had organized The Thule Society, and they had tried to oppose it. There's actually a brief period of time in which the communists take over Munich, the Bavarian Soviet Republic, which doesn't last very long. And eventually, the Army volunteers to put this down. While that's going on, by the way, Hitler is actually sitting in the barracks in Munich wearing a red armband, because he is technically part of the soldiers who have gone over to the Bavarian Soviet Republic.

(01:43:09)
He seems to have had flexible interests in this case. So, once order is restored, so to speak, the army comes in and decides, "Well, one of the things we need? We need to have people who can lecture soldiers on patriotic topics." And so, there is a particular captain by the name of Karl Mayr who spots Hitler. He later describes him as a stray dog looking for a master. Hitler has a knack for public speaking. Other soldiers will listen to him. Some people can do that, some people can't. Mayr decides that he's a good candidate for further training. And so, yes, they bring him in. They turn him into what's called a Verbindungsmann, a liaison man. He's an army propagandist.

(01:44:03)
And then, you've got this little outfit called the German Workers Party. And essentially what happens is that Hitler is sent in to take over leadership of that, which is what happens. He shows up, he attends a meeting, there are 50 people there. By the way, the topic of the first meeting he's at is how and why capitalism should be abolished, which is not what you might, well, expect. Because remember, the German Workers Party is trying to cast itself as a counter-Bolshevism. So it's not saying that capitalism is great, which is important. No, capitalism is evil. We agree upon that. We just agree it has to be destroyed from a nationalist point of view, as opposed to some strange internationalist point of view. So Hitler is essentially, as I see it, sent in by the Army as their trained man to assume leadership within this small party and to use it-
Rick Spence
(01:45:00)
To assume leadership within this small party and to use it for the army's patriotic propaganda campaign. And he succeeds in doing so, even down to the name change, to the National Socialist German Workers Party. I mean, really, what sounds more red than that?
Lex Fridman
(01:45:21)
So the interesting thing here is from where did anti-Semitism seep into this whole thing? It seems like the way they try to formulate counter-Marxism is by saying the problem with capitalism and the problem with Marxism is that it’s really Judeo-capitalism and, “Judeo-Bolshevism”. From where did that ideology seep in?
Rick Spence
(01:45:50)
Well, that's a huge topic. Where does anti-Semitism come from? Let's start with that term itself, a term which I have really grown increasingly to dislike because it doesn't actually say what it means. Anti-Semitism is anti-Jewism. That's all it is. I'm not sure whether there has ever existed a person who hated Jews, Arabs, and Maltese equally. Okay. That's kind of hard to imagine. I don't know. But that's technically what it would mean, because let's face it, most Semites are Arabs. So if you're an anti-Semite, then you don't seem to distinguish Jews from Arabs. It makes no sense. The term is invented by, guess what? An anti-Semite. Okay. A guy in the 1870s, a German journalist by the name of Wilhelm Marr, who is, wouldn't you know it, part Jewish himself. And who decides that you really need a better term than Judenhass, Jew-hate, which was the term in use, because that just sounds so inelegant, doesn't it?

(01:47:05)
Okay. What do you want to call yourself, a Jew-hater or an anti-Semite? See, anti-Semitism, it's got that -ism part at the end of it, which means it's a system of belief. Anything that has an -ism must somehow be scientific and important. It's all part of the 19th century obsession with trying to bring science into one thing or another. So we're going to get rid of Jew-hate, and we're going to turn it into anti-Semitism. And we're only going to be talking about Jews, but we'll never actually say that. And somehow this invention by a Jew-hater of the term anti-Semitism, to disguise the fact that he's a Jew-hater even though he's partly Jewish, worked, because everybody has bought it and repeated it ever since. So I don't know, maybe just because anti-Jewism would be, is it too direct in some way? Do we have difficulty confronting what it actually is that we're talking about?
Lex Fridman
(01:48:03)
I do wish terms were a little bit more direct and self-explanatory. Yeah, Jew-hate is a better term.
Rick Spence
(01:48:09)
Well, the question then comes, what exactly do you hate about Jews? And a lot of this has to do with, if you go back prior to the 19th century, if Jews were hated, they were hated for religious reasons. In Christian Europe, they were hated because they weren't Christians and they existed as the only kind of significant religious minority. But other than that, they tended to live separately. They had little economic influence. Jews tended to live in shtetls in the East, ghettos elsewhere. Some were involved in banking and business, but they sort of remained segregated from much of society.

(01:48:55)
That changes when you get to the 19th century and with what's called Jewish emancipation. And that means that between about 1800 and 1850, most European countries drop the various legal or social restrictions against Jews. They are assimilated into the general society. So ideally, you stop being a German Jew and you become a Jewish German. Those are two very different, important concepts. And what that does, of course, is that it opens up the professions, the business world, and elsewhere. So Jews, who already had a good deal of experience in banking and business, move into those areas and professions and become quite visible.

(01:49:48)
And that's what then creates anti-Semitism, because in some way that is seen as part of the changes that have taken place. And there are a lot of things going on here. Part of it has to do with the kind of wrenching social and economic changes that took place with industrialization. So one of the things to keep in mind is that in the process of industrialization, just like today, whole classes of people were made extinct economically, craftsmen, for instance. So when factories came along and began to produce things with machines, all the craftspeople who had made those things previously are now unemployed or go to work as wage labor in factories. So there are winners and losers in industrialization. And what people saw in Germany and elsewhere is that among this new sort of rising capitalist elite, among these new professions, among the bureaucrats that are coming out of these burgeoning states, there were visibly a fair number of Jews.

(01:51:05)
So in some way, the rise of Jews in the minds of many people was connected to all of the other bad things that were going on. The world was changing in a way we don't like, and seemingly the Jews are prospering while I am not. And that was true in Germany and elsewhere. Jews became highly visible in the professions; they became very visible in banking, in the legal profession, in the medical profession. And those are people that a lot of people would come in contact with: bankers, lawyers, and doctors. They were not the majority there, but vastly overrepresented in terms of the general population, and especially within the cities. So in that sense, the root of anti-Semitism to me is that Jews in Germany and elsewhere, and not just in Germany by any means, France, Britain, everywhere else, became identified with the bad changes that were taking place.

(01:52:10)
But you also found that Jews were not only prominent among capitalists, they were also prominent in the socialist movement as well. So if we return to Germany in 1919, in the aftermath of World War I, and you look around in Bavaria or elsewhere, you tend to find that there are a lot of Jews in visible positions on the German left. Rosa Luxemburg is but one example of that, Eugen Leviné another; some of them came in from Russia. When the Soviets send a representative to Germany in this period, it's Karl Radek, a Jew. So it wasn't difficult to exploit that, to argue that just as the ranks of capitalism were full of Jews, the ranks of Bolshevism, or of the revolutionary left, were full of Jews. Because you could easily go around and point out a great many of them.

(01:53:16)
Again, they don't have to be the majority, they just have to be numerous, prominent, and visible, which they were. So this provided you, in the case of the propaganda of the German army, the type of stuff that Hitler spewed out. They could put in all the anti-capitalist rhetoric they wanted to. The army was never going to overthrow capitalism, and the capitalists knew they weren't going to do it. So go ahead, talk shit about us. We don't really care. That's not going to happen, because we know that the army would prevent it from happening. The way to undermine the real enemy, as it was seen, the revolutionary left, was to point out the Jewish influence there. I mean, look at Russia. Well, Lenin is at the top, Trotsky, there he is. Look, there's a Jew. There's one. Radek is a Jew. It wasn't hard to find them in that regard.

Protocols of the Elders of Zion

Lex Fridman
(01:54:11)
You gave a lecture on the Protocols of the Elders of Zion. It’s widely considered to be the most influential work of anti-Semitism ever perhaps. Can you describe this text?
Rick Spence
(01:54:25)
Well, the Protocols of the Learned Elders of Zion is probably one of the most troublesome and destructive works of literature that has ever emerged. And yet its origins remain obscure. So you get a whole variety of stories about where it came from. The one story that is often told is that it was the work of the Okhrana, the Russian secret police, and in particular that it was all crafted in 1904 and 1905 in Paris. There's a whole description in which Pyotr Rachkovsky, supposedly the chief of the Okhrana at the time, was the man behind it, and another fellow by the name of Matvei Golovinski was the drafter of it. And that they had this document written by a French political writer from some decades back, called Dialogue in Hell Between Machiavelli and Montesquieu, which they were then adapting. Usually it's argued that they plagiarized it into the Protocols.

(01:55:46)
And none of that is really true. I mean, the first part about it is that at the time this supposedly took place, Rachkovsky wasn't working for the Okhrana, he had been fired, and he wasn't in Paris. And the whole situation which is described couldn't have taken place, because the people who did it weren't there. It's a story, but it provides a kind of explanation. So to see how the Protocols emerge, you always have to go back. This is one of the things that I have found always useful in research: go back to the beginning, find the first place this is mentioned, or the first version, or the first iteration. Where does it start?

(01:56:37)
So you go back to Saint Petersburg, Russia, around 1903. There is a small right-wing anti-Semitic newspaper published there called Znamya, "Banner." And it publishes, in a kind of serial form, a work it doesn't credit to any original author. And this is the first version of the Protocols of the Learned Elders of Zion. But what it's actually describing is a Judeo-Masonic plot to rule the world. Those two terms are always combined together. And I think in the earlier version, there are far more mentions of Freemasons than there are of Jews.

(01:57:26)
And the publisher of Znamya is closely connected to a thing called the Union of the Russian People, which ostensibly existed to defend the empire against subversion, and particularly against what it thought was Jewish subversion. They also argued that the prominence of Jews in revolutionary movements somehow proved that this was in some way a Jewish revolution. But again, this is not a mainstream newspaper. It's not appealing to a mainstream population. Very few people saw it, but this is where it appears. Now keep in mind that's two or three years before it's usually said to have been written. Or the other version is that there's this crazy priest by the name of Sergei Nilus, and he wrote it, or actually appended it as an appendix to his work in 1905. But it was around before that. So Nilus didn't create it. It wasn't drafted in Paris in 1904 and 1905. It was serialized in an obscure right-wing Russian newspaper in 1903.
Lex Fridman
(01:58:34)
And by the way, we should say that these are 24 protocols.
Rick Spence
(01:58:41)
Well, it varies.
Lex Fridman
(01:58:42)
It varies.
Rick Spence
(01:58:43)
Yeah.
Lex Fridman
(01:58:44)
That are, I guess, supposed to be meeting notes from the supposed cabal where the Jews and Freemasons are planning world domination together. But it's like meeting notes, right?
Rick Spence
(01:58:59)
Protocols, which is basically the Russian term for notes of a meeting.
Lex Fridman
(01:59:04)
Yeah.
Rick Spence
(01:59:05)
Well, it's notes of a meeting. These are the goofiest things I've ever seen, because what you've got here, it's not notes. No one takes notes from a meeting that way. What you've got is the exposition of a Bond villain. All right. It's all of this: we're going to do this, we're going to do that. And the last thing you want to do, if you've got a plan for world domination, my suggestion would be, don't write it down. So it's not notes of a meeting. It's again another sort of narrative or story that's being told. It bears no resemblance to the Dialogue in Hell Between Machiavelli and Montesquieu. But what it is, the best thing, it's not particularly readable in some ways. There was an Italian writer by the name of Cesare De Michelis, who wrote a book translated into English called The Non-Existent Manuscript. And what he does is take the different versions, starting with the 1902, 1903 versions, and look through the other ones, and he tries, in the process, to reconstruct what he thinks the original might have been.

(02:00:20)
But the other thing he does, which was fascinating to me, is that he takes this whole initial text and in bold type indicates the paragraphs, but more often sentences or phrases, that appear to be identical to the Joly work, and they're just scattered throughout it. There's no particular rhyme or reason to it. You don't plagiarize that way. I mean, who does that? A sentence here, a sentence there. Which has led to a peculiar theory of mine, which of course I will have to expound upon, which is that I think that the original author of the Protocols was the same Maurice Joly. I think what someone stumbled across was a work which he wrote and never published, and which he just drew on. It's exactly what someone would do working from your own kind of material, because I've written things and then taken what I've written and sort of repackaged it into something else.
Lex Fridman
(02:01:31)
Sentence here, sentence there.
Rick Spence
(02:01:32)
Yeah. And the same sort of thing comes out; only bits and pieces of it remain. So why would Joly have done that? Joly, we're talking about a man whose career basically spanned the 1850s to 1870s. He's an obscure figure. I'm not even totally sure he existed, I mean, but it's one of those things, you go looking for him.
Lex Fridman
(02:01:58)
I love that you’re a scholar of people that just kind of emerge out of the darkness.
Rick Spence
(02:02:03)
They just come from nowhere.
Lex Fridman
(02:02:05)
Yeah. And there's the Okhrana there also. And we should also say, I guess, about the original: what's the language of the original? Russian?
Rick Spence
(02:02:12)
Russian. But my hunch is that it's adapted from a French version. First of all, they're constantly harping on Freemasons, which wasn't nearly as big an idea in Russia. If you go back to France in the 1890s, there are some big scandals. Well, there's the Dreyfus scandal. We've got that. All right. Where you've got a Jewish officer on trial for being a traitor. So that was [inaudible 02:02:34]. So you bring in the whole Jewish element: Jews as disloyal, the Dreyfus case, 1894. Earlier you had the Panama scandal, which was this huge investment scandal when the Panama Canal company in Paris collapsed, and again many of the major players in that were Jewish financiers. And then you've got the Taxil hoax.

(02:02:59)
So the Taxil hoax was the work of this guy, his real name was, I think, Jogand-Pages. He was kind of a French journalist. I don't know. He started out writing porn. I mean, he wrote things like The Sex Lives of the Popes and The Erotic Bible and various things of that kind. He was a Catholic, broke with the Catholic Church, wrote bad stuff about the Popes, apparently became a Freemason for a while, and then supposedly recanted his evil ways and went back to the church. And then, under the name Leo Taxil, he began writing a whole series of articles basically arguing that there was a Masonic-Satanic conspiracy run, by the way, by an American, Albert Pike. And this also included child sacrifice, presided over by a high priestess, Diana Vaughan. It's got Pizzagate in it as well.

(02:03:56)
And so there's child sacrifice, weird robes, Bohemian Grove stuff, and the Freemasons are devil worshipers going back to the Knights Templars. And so there's a thing called The Devil in the 19th Century and the secrets of Freemasonry, and this became a bestseller in France. So France is just obsessed with all these kinds of conspiracies. Evil, Satanic Freemasons; evil Jewish financiers; Dreyfus. This is the brew where all of this comes from. So you want to figure out how Freemasons and Jews get connected together? France is the place where this happens.

(02:04:36)
Now, Taxil, or Jogand-Pages, eventually pulls another interesting thing. Around 1897, critics argue that he's making this stuff up and demand that he present Diana Vaughan, the supposed Satanic high priestess toddler-killer. And he says, oh, we're going to have a press conference. She'll appear and say all of this stuff as she returns to the church and possibly becomes a nun. And so people show up, high figures in the Catholic Church show up, and he does too. No Diana Vaughan. And Jogand-Pages goes, it's all a hoax. I made it up. You're all a bunch of idiots for believing it. You members of the church, especially, what gullible morons you are. And that's it. He confesses.

(02:05:21)
To this day, however, you will find people who will insist that it's actually true, because they desperately want it to be true. But this is, I think, the milieu, I like that word apparently, that this comes out of, this whole kind of unhealthy mix. So France, to me, is the only place where, in the decade preceding, something like this would be concocted. So it was either created by some sort of unknown person there, or, and I still think this even though he dies in like 1879, it goes back to Maurice Joly's troubled career. He went from being an opponent of the French Emperor, Napoleon III, which is what the whole Dialogue was written against.

(02:06:17)
And then he was, for a time, a close political ally of a French politician by the name of Adolphe Cremieux. So Adolphe Cremieux, well, what's he got going for him? He was kind of a radical politician. He was an opponent of Napoleon III. He was a Freemason. Oh, and he was Jewish. In fact, at one point, I think he was actually the head of the Scottish Rite in France, and an important figure in the Alliance Israélite, the Jewish organization in France. So he was publicly, very prominently, Jewish and Masonic. So here's someone who would've linked them together.

(02:07:06)
And Joly falls out with him, as he did with virtually everyone; this was a guy whose life largely consisted of duel threats and fistfights. So he gets angry at Cremieux, and it's exactly the type of thing that he might write to vent his spleen. But he died in obscurity, probably a suicide, that's kind of difficult to tell. His son seems to have inherited most of his literary works, and his son became a journalist, worked for newspapers in France in the 1890s, but was also associated with some people on the fringes of the Okhrana or the Russian press in France. So one of the little things that had happened by this time is that France and Russia had become allies, even though their political systems were completely incompatible.

(02:08:16)
And so the Russians were using money to subsidize French newspapers that were championing the alliance between the two. Russian meddling. Okay. Now they're just paying to have the right kind of newspapers come out. So there's this whole connection between the Russian journalistic world and the French journalistic world and all of these scandals which are going on, and Joly's son is part of it. And then, ten years down the road, this thing pops up in a newspaper in Saint Petersburg. That's where I think the origins lie.
Lex Fridman
(02:08:57)
Why do you think it took off? Why do you think it grabbed a large number of people’s imaginations and even after it was shown to be not actually what it’s supposed to be, people still believe it’s real?
Rick Spence
(02:09:14)
Well, it doesn't take off immediately. Okay. It never receives any kind of wide readership; I mean, nobody much reads the first edition of it. It keeps getting revised; there are something like 18 or 19 different versions as it goes through. I mean, people leave this protocol out or leave out another one. As time goes on, there's more and more emphasis on Jews and less and less on Freemasons. And the whole thing could have begun as an anti-Masonic tract.

(02:09:46)
I mean, you could leave Jews out of it entirely and just turn it into a Masonic plot to rule the world, but let's just throw them in as well, since the two things are already being combined elsewhere. It doesn't become a big deal until really after the first World War, because the initial versions of it are all in Russian. And let's face it, that's widely read in Russia, but not much read anywhere else. It's a different alphabet. Nobody can even see what it means. So it has no particular influence outside of Russia. But then you get to 1919 and you get all these different versions of it. Suddenly you get two English versions in the US, another English version in Britain, a German edition, a French edition, a Dutch edition. Everybody is coming up with these things. So it's not until the immediate aftermath of the first World War that this metastasizes and begins to show up in all of these different foreign editions.

(02:10:49)
And I think that just has to do with the changes that had taken place during the war. One of the things that people began looking for was an explanation: why was there a war? We've just had this whole disastrous war and the world has been turned upside down. So there has to be some kind of explanation for that. And one of the things this offered was: see, there's this evil plan that has been put into motion, and this could possibly explain what's taking place. The reason the Protocols were, I think, widely bought then, and why they still are in many ways, is the same reason the Taxil hoax I was talking about was: because it told a story that people wanted to believe.

(02:11:37)
So in France in the 1890s, there was widespread suspicion of Freemasons. It was seen as a somewhat sinister, secretive organization, certainly secretive. And there were also the same sort of generalized prejudices about Jews: clannish, distinct, too much influence, all of the things that went on. So it was sort of easy to combine those two things together. And even though Taxil admitted it was a hoax, there were those who argued that it's just too accurate, it describes things too completely to be a hoax. And you get the same arguments, in fact I've heard the same arguments, with the Protocols. I don't even buy the Protocols as an example of plagiarism, because you can't actually prove what's being plagiarized in any sense. To me, the Protocols are a prime example of what I call a turd on a plate. These things crop up. I have to explain that now.
Lex Fridman
(02:12:47)
Yeah, please.
Rick Spence
(02:12:47)
But afterward. What is a turd on a plate? Well, a turd on a plate is a turd on a plate. Suppose you come in and there's a plate sitting on the table and there's a turd on it. Now the first things you're going to wonder: is that a turd? Is it a human turd? Where did it come from? Why would someone poop on a plate? There are all these questions that come to mind. It makes no sense, but that's what you're confronted with; it's just there. Right. I don't know where it came from. I don't know why. But there's a turd on a plate, and that's what the Protocols are: they're just there.
Lex Fridman
(02:13:24)
But the reality is, just like with a turd on a plate, you take a picture of that in the modern day and it becomes a meme, goes viral, and becomes a joke on all social media, and now it's viewed by tens of millions of people or whatever. It becomes popular. So wherever the turd came from, it did captivate the imagination.
Rick Spence
(02:13:43)
Yeah.
Lex Fridman
(02:13:44)
It did speak to something.
Rick Spence
(02:13:45)
But it does seem to provide an explanation.
Lex Fridman
(02:13:48)
Can you just speak to Jew-hatred? Is it just an accident of history? Why was it the Jews versus the Freemasons? Is it the collective mind searching for a small group to blame for the pains of civilization, and then Jews just happened to be the thing that was selected at that moment in history?
Rick Spence
(02:14:15)
It goes all the way back to the Greeks. Let’s blame them. So one of the first occasions you find the idea that Jews are a distinct, mean-spirited, nasty people goes back to a Greco-Egyptian historian named Manetho. This is around, I think, 300 B.C., early, can’t even rope the Romans into this one. So Manetho is trying to write a history of the dynasties of Egypt. I think his history of the dynasties of Egypt is still one of the basic works in this. But he tells this whole story, which essentially describes the kind of first blood libels, that the Jews, to celebrate their various religious holidays, would capture Greeks and fatten them up in the basement and then slaughter them and eat them or drain their blood or do something. Yeah. It’s just the sort of earlier version of that kind. Also, I think it repeats the sort of Egyptian version of the Exodus out of Egypt, which is quite different than the biblical version. In this case, in the Egyptian version, they stole all the stuff out of the Egyptians’ houses and ran off into the desert.
Lex Fridman
(02:15:45)
The Jews stole all the stuff and ran off?
Rick Spence
(02:15:47)
Yeah, Hebrews. Hebrews robbed the Egyptians. They were taken in. We took them in and sheltered them, gave them jobs, and then they stole all the jewelry and ran away. We didn’t even chase them. We were glad to see them gone. So it’s a different narrative on that story, but it essentially portrays the Jews as being hostile, that they don’t like other people, they’re contemptuous of other people’s religions, the rest of it. And see, the Greeks tended to think of themselves as being extremely cosmopolitan. Now, the Greeks run across people worshiping other gods. They go, oh, well those are just our gods under different names. Okay. Everything was sort of adjusted into their landscape. So you end up with that kind of hostility, which was there at the time. And that was probably influenced also by some of these earlier rebellions that had taken place in Egypt.

(02:16:53)
During the Roman period, you not only have the Judean Rebellion in 70 A.D., but you have a couple of other uprisings in North Africa, and they were very bloody affairs. And in some cases, Jews began massacring other people around them. They start killing the Greeks and the Greeks start killing them. So there was, from that period on, a certain amount of bad blood, of mutual contempt, between Greeks, between Hellenes, between the people who became Hellenized, as the Romans would be, and the Jews. And the Romans also seem to have developed much of that idea. They considered Judea as being a horrible place to have to govern, inhabited by a stubborn, obnoxious people, not well-liked.

(02:17:48)
So that’s really where you see the earliest version of that. And the reasons for it would be complicated, but you could say that going back to Manetho and to the Roman period, Jews, Judeans, frequently experienced difficulties, conflicts with other people living around them. And part of that probably had to do with the diaspora, which was the movement of Jews out of Judea. Well, you get the idea that the Romans came in and kicked everybody out, which they didn’t. Jews had been leaving Judea since it was a poor, limited area, and moving into areas like North Africa, Egypt, Cyrenaica, all the way into Southern France. They moved widely around the Roman Empire. So that sense of both distinctness and hostility existed since ancient times.

(02:18:48)
So it wasn’t just… The attitude of the church towards Jews was mixed. Well, one of the ideas, of course, is that at the end of time, just before the second coming, one of the signs, how are we going to know that Jesus is going to return and the world is going to end? Well, the Jews will all convert. There will be a mass conversion. They’ll sort of see the light. So there have to be Jews around to do that, or we won’t know. It’s like a canary in a coal mine. You have to have them there to tip it off. So that was one of the arguments within the church as to why Jews would not be forcibly converted, beyond the fact that it’s just kind of bad policy to forcibly convert people because you don’t know whether it’s sincere, but they need to be preserved as a kind of artifact, which will then redeem itself at the end of time. It’s not something which is encouraged. It predates Christianity, and then Christianity, of course, in its own way, just sort of…
Rick Spence
(02:20:00)
… of course, in its own way, just plagiarizes the whole Jewish thing, doesn’t it? I mean, I hesitate to use that term, but that’s what you do. It’s just like, “Well, we’re the Jews now. You used to have a unique relationship with God, but now it’s been passed over to us. Thanks for the Bible.” I can remember that on my mom’s side, I was periodically exposed to Sunday school, and pretty much the Old Testament was always presented as if somehow it was the history of, for lack of a better term, Europeans in some way. It was a Christian history. It was all the prequel to that. Also, the term Hebrew was always used, never Jews. So the ancient Hebrews, and somehow the Hebrews just became the Christians, and I don’t know, the Jews, they didn’t get the memo or something.
Lex Fridman
(02:20:59)
So it’s basically like, Christianity, the prequel, is the Old Testament.
Rick Spence
(02:21:05)
Well, they just take over. “We have the special dispensation now. Thank you very much.” You’re an artifact.
Lex Fridman
(02:21:13)
So it’s interesting. So this whole narrative that I would say is a viral meme started, as you described, in 300 BC. It just carried on in various forms and morphed itself and arrived after the Industrial Revolution in a new form to the 19th and 20th century, and then somehow captivated everybody’s imagination.
Rick Spence
(02:21:41)
I think that modern antisemitism is very much a creation of the modern world and the Industrial Revolution. It’s largely a creation of Jewish emancipation. It’s the nasty flip side of that. All of the restrictions, they’re thrown off, but now also you become the focus of much more attention than what you had before. Prior to that, you had the ghettoization, which worked both ways. I mean, there were rabbis who praised the ghettos as a protection of Jews against the outside world, because inside we can live our life as we wish and we’re unmolested. The great fear is that if we were absorbed into this larger world, we’ll lose our identity. That sort of question comes up in the 18th century in things like the Haskalah movement in Germany, because the German Jews were always at the cutting edge of assimilation and modernity. And Moses Mendelssohn was an example of that, arguing that we just need to become Germans. So as much as possible, synagogues should look like Lutheran churches. Things should be given in good German. We need to become Jewish Germans. We don’t want to become a group of people who are apart in that way, and that has created great tensions ever since.

(02:23:29)
One of the essential points, it seems to me, in antisemitism, anti-Jew-ism, is that all the Jews are in this together. Isn’t that one of the things? Okay. They’re always talked about as if they’re a collective. Jews this, Jews that, as if it’s a single, undifferentiated mass of people who all move and speak in the same way. From my personal experience, not being Jewish, it’s incredibly diverse in many ways, really. One of the things that antisemitism proposes is a continuity or a singularity of Jewish identity that never existed.
Lex Fridman
(02:24:10)
Just like you said, on one hand there’s a good story, on the other hand is the truth, and oftentimes the good story wins out. And there’s something about the idea that there’s a cabal of people, whatever they are, in this case our discussion is Jews, seeking world domination, controlling everybody, that is somehow a compelling story. It gives us a direction of a people to fight, of a people to hate, on which we project our pain, because life is difficult. Life for most is full of suffering. And so we channel that suffering into hatred towards the other.

(02:24:48)
Maybe if you can just zoom out, what do you learn, from this particular discussion, about human nature, that we pick the other in this way? We divide each other up into groups and then construct stories. And we like constructing those stories, and they become really viral and sexy to us. And then we use those stories to channel our hatred towards the other.
Rick Spence
(02:25:20)
Well, yeah. Jews aren’t the only recipient of that. I mean, anytime you hear people talking about Jews this or that, white people this or that, black people this or that, Asians this or that, where they’re an undifferentiated mass who apparently all share something in common, well, then nobody’s really thinking. And the other thing you’ll find is that people who express those views, when pressed, will argue that, “Oh, well,” if they actually know anybody from those groups, those are okay. It’s like Nazis. They go, “This is an okay Jew. They’re all right.” They would always be constantly making exceptions in one form or another. When they actually met an actual human being, and they seemed to be fairly normal, well, they were okay. So what they actually hated weren’t actual people for the most part, it was just this golliwog vision that they had of them. You’re not even talking about real people.

(02:26:20)
I don’t know. What does that tell you about human nature? Well, okay, in 70-odd years, what have I learned about my fellow creatures? One, I don’t actually understand them any better than I ever did. In fact, less so. I would say this: when I was 17, I thought I had the world much more figured out than I do now. Completely deluded. But it seemed to make much more sense, and I could categorize things. My basic take on human beings: most people, most of the time, are polite, cooperative and kind, until they’re not. And the exact tipping point, the moment in which they go from one to the other, is unpredictable.

Charles Manson

Lex Fridman
(02:27:14)
God, that’s brilliantly put. Speaking of the tipping point, you gave a series of lectures on murderers, crimes in the 20th century. One of the crimes that you described is the Manson family murders, and that combines a lot of the elements of what we’ve been talking about and a lot of the elements of the human nature that you just described. So can you just tell the story at a high level as you understand it?
Rick Spence
(02:27:41)
The Manson family. Well, you begin with Charles Manson, who’s the key element in this, and Charles Manson, for most of his life, up until the time that he’s around 33, is an unexceptional petty criminal. In and out of prison and reform school from an early age, not really associated with violent crimes. He did stuff like steal cars, write bad checks, became an unsuccessful pimp and drug dealer. So around 1967, he gets out of his latest stint in federal lockup at Terminal Island near Los Angeles, California. By that time, he has learned how to play the guitar, has ambitions to become a musician, and also has proclaimed himself a Scientologist, not that he ever seems to have practiced, but that’s what he would claim that he was. He educated himself in prison to a certain degree. So when he gets out of prison in ’67, he was a model prisoner. He behaved himself, and it seemed… You can imagine his life going in a completely different direction. And here, again, I’m going to say something good about Charles Manson, which is that he actually was a decent singer. If you really listen to some of the stuff he did… He’s not a great singer, but other people got recording contracts with less talent than he had, and he could play a guitar. The Beach Boys actually recorded one of his songs without him.
Lex Fridman
(02:29:20)
How would you evaluate Hitler’s painting compared to Charles Manson’s-
Rick Spence
(02:29:24)
Well, you’re supposed to say it’s terrible. It looks average to me.
Lex Fridman
(02:29:28)
Yeah, it’s a landscape.
Rick Spence
(02:29:30)
If you didn’t know it was Hitler, I don’t know what people would say about it.
Lex Fridman
(02:29:38)
I’m sorry for the distraction.
Rick Spence
(02:29:41)
He’s an average painter. That’s what it was. It’s nothing like crazy, genocidal-maniac paintings. You don’t really have those. So Manson, he could have done that. He made certain inroads into the music industry, and if he hadn’t been such a weirdo, he might’ve gotten further with it. But his life could have taken a different turn. So this is one of the questions I have. How did a guy who’s an unexceptional career petty criminal suddenly emerge as some sort of criminal mastermind, a Svengali who can bend all of these people to his will and get them to go out and commit murder? That’s a real shift that you have.

(02:30:23)
So the first thing that could tell you that something odd is going on: he gets out of prison in LA County and he’s on parole. Parolees are supposed to have a job, not supposed to leave the jurisdiction of their parole. He heads straight for the Bay Area, violates parole right off the bat. Two weeks later, he drifts into the parole office in the Bay Area, whereupon he should have been arrested and sent back to Terminal Island, but instead they just assign him a [inaudible 02:30:57]. I don’t know, maybe things were easier then in some way. So he gets assigned a parole officer, Michael Smith. Michael Smith is initially handling a number of parolees. But after a while, once he takes on Manson, he only has one parolee he’s supervising, Charlie Manson, which is odd. Then you also find out that Michael Smith, in addition to being a parole officer, is a graduate student at the University of California studying group dynamics, especially the influence of drugs on gangs and groups. He’s also connected to the Haight-Ashbury Free Clinic, which is a place where the influence of… Because Haight-Ashbury had lots of drugs and lots of groups. So Charlie Manson never gets a regular job, hangs around with young girls and ex-cons, engages in criminal activity. He is repeatedly arrested, but nothing ever sticks for the next couple of years.

(02:32:04)
Who gets that type of thing? Who gets a get-out-of-jail-free card? Informants. So here is what I think. Again, this is speculation, but Manson, at some point after he got out of prison, is getting this treatment because he has been recruited as a confidential informant.
Lex Fridman
(02:32:28)
For who?
Rick Spence
(02:32:29)
For who? That’s the interesting question. So, probably not for any local police departments. My best suspicion is probably the Federal Bureau of Narcotics, precursor to the DEA. Federal parolee, federal parole officer, graduate student in drugs and group dynamics. And eventually, with permission, he goes back down to LA. And what is he part of when he’s there? Well, he’s on the fringes of the music industry, the Wilsons and elsewhere, which also brings him to the fringes of the film industry. So one of the things, if you’re looking in terms of Hollywood and music industry elites and the flow of… Oh, and he’s also dealing in drugs and girls. So an early version of Jeffrey Epstein. Manson attracted lots of underage runaways and trained them, used them, also associating with biker gangs who produced the drugs, et cetera.

(02:33:41)
So that’s part of it. He’s an informant on the movement of drugs, basically, within the film and music industries. And he’s given pretty much free rein at that point. What then happens in August of 1969 is that there are these murders. First, Sharon Tate and her friends on Cielo Drive. I think everybody has probably pretty much heard that story before. And of course, the question is why Cielo Drive? Why Sharon Tate, Frykowski and the rest of them? Manson was familiar with the place. He had been there before. Members of the family had been there before, so he knew where it was. It wasn’t an easy place to find. The original house is no longer there, but on the same property a house is built. And if you didn’t know where it was… It’s not some place where, “Let’s just go for a drive in the Hollywood Hills and murder people in a house.” Well, that isn’t the one that you would come across. There are lots of connections there. Wojciech Frykowski, one of the people killed at the Cielo Drive house, was involved in drug dealing. That’s a possible connection between the two, probably a fairly likely one. Probably not the unfortunate Sharon Tate at all. She was probably in the wrong place at the wrong time. Her husband might’ve been, you never know.

(02:35:06)
And then the next night after the slaughter there… Which, by the way, Manson is not at. So this is one of the interesting things about it: Charles Manson doesn’t kill any of these people. His crime is supposedly ordering the killings to be done. He supposedly thought that the killings at the Tate house were sloppy, and he was going to give everybody a crash course in how you commit seemingly random murders. So the next night he takes a group of people over to the LaBiancas’ house in a different section of LA. You’ve got Leno and Rosemary LaBianca; the guy is a grocer, his wife runs a dress shop, upper middle class, and they’re bound and gagged and hacked to death. As at the Tate residence, various things like “piggy” are written, various messages in blood, things that are supposed to look like cat’s paws. Because the idea was that one of the groups being framed for this was the Black Panthers.

(02:36:10)
So the general story that comes out in the subsequent trial is that this was all a part of something called Helter Skelter, which supposedly was an idea that… That sounds like a Beatles song. That’s where he got it from. He thought the Beatles were talking to him through their music and that there was going to be an apocalyptic race war, and this was all part of a plan to set this off. So this is why the Black Panthers were trying to be implicated in this. Although, how it was supposed to do that is never really explained.

(02:36:46)
Here is what I think was really happening, what really happened and how I think it fits together. Before Sharon Tate and her friends or the LaBiancas were killed, there was a murder, by some of the same members of the family involved in the later killings, of a musician and drug manufacturer by the name of Gary Hinman. So Manson, again, was involved in the drug trade, and Hinman made the drugs. He was a cook, basically, and he brewed them up in his basement, sold the drugs to Manson, who sold them to biker gangs like the Straight Satans, which was one of the groups that he used, and they distributed them elsewhere. Well, one day, the Straight Satans show up and complain that the last batch of meth or whatever it was that they got from Manson had made some of their brothers very, very ill, and they were quite unhappy about that, and they wanted their $2,000 back. Manson had gotten those drugs from Gary Hinman. So he is unhappy, and he sends Bobby Beausoleil and a couple of the girls over to Hinman’s place to get the money from him. As the story is later relayed, I think by Susan Atkins, Hinman denied that there was anything wrong with his drugs and refused to pay up, which led to an interrogation and torture session in which he was killed.

(02:38:22)
And the idea here was, what are we going to do with that? Well, one of the other groups that Hinman had sold drugs to were, guess what? People associated with the Black Panthers. So we’ll leave these things behind and they’ll get the blame. So it’s Bobby Beausoleil who then takes Hinman’s car and decides to drive it up the coast, by the way, with a bloody knife with Hinman’s blood and hair on it, and blood on the seats in the car, and then he pulls it off the road and decides to sleep it off, and he gets busted. So, the police find Hinman’s body, find Beausoleil in Hinman’s car with a bloody knife on him. He gets arrested. Beausoleil was very popular with some of the girls. There’s consternation in the family that Bobby has been arrested. So how can we possibly get Bobby out of jail? Copycat killings. So if we go kill more people and we make it look the same, then see, Bobby couldn’t possibly have done it. Now, see, he just borrowed the car. Okay, he stole the car, but the knife was already in… He didn’t have anything to do with this. So that, to me, makes the most sense out of what followed.
Lex Fridman
(02:39:39)
How often do people talk about that theory? That’s an interesting theory.
Rick Spence
(02:39:43)
Well, it’s there. It’s just not the one that… Bugliosi obviously wanted to go with Helter Skelter because, again, it was a story that people could understand. It was sensational and it would catch on. Also, another probable issue in that was that his star witness was Linda Kasabian. Linda Kasabian was present at both the Tate and LaBianca murders. She didn’t participate in the killings, according to her. She drives the car. But everybody else talked about what had happened. Well, okay, she turns [inaudible 02:40:19] evidence and gets total immunity, and it’s largely on her testimony that all the rest of the case is based. Now, if you start throwing into the equation that she proclaimed her love for Bobby Beausoleil, and that she, according to others, was the chief proponent of the copycat killings, well, then that would get messy. Now, there’s one guy at the center of this: Charles Manson. He ordered all of this done to ignite a race war, even though, how would any of that do it?
Lex Fridman
(02:40:58)
So that doesn’t make sense. But he is nevertheless at the center of this because he’s the glue of the family. Right?
Rick Spence
(02:41:05)
He exerts a tremendous amount of psychological control over them.
Lex Fridman
(02:41:08)
How was he able to do that? Sorry to interrupt. Because you said he was a petty criminal. It does seem he was pretty prolific in his petty crimes. He did a lot of them.
Rick Spence
(02:41:17)
He had a lot of access to LSD, which he started getting at the free clinic in San Francisco. So there was lots of it floating around. Some descriptions of the family at Spahn Ranch are that people were basically taking acid on a daily basis, which, by the way, was also a potential problem with Linda Kasabian’s testimony, since she also admitted to being high most of the time, and also to thinking she was a witch. Where do you want to go with that? See, if Manson wasn’t Manson, if he hadn’t actually acted like the crazed hippie psycho goofball that Bugliosi painted him as being, then Kasabian’s testimony wouldn’t have been as strong, because you could… I mean, the first thing against her is that you’ve got immunity for telling the story the prosecution wants. That’s a little iffy, and we won’t even bring in the witch and the drugs and being in love with Bobby Beausoleil. So if Manson had been dressed like you, sitting there in a suit and tie, and behaved himself and spoken normally… This isn’t to say that he wasn’t guilty as hell.

(02:42:38)
So what he supposedly did was inspire all of these killings, and I think that probably begins with the Hinman killing; he told Beausoleil to go over there and get the money one way or the other. I don’t know whether he told him, “If you don’t get the money, kill him.” But Hinman’s dead. And then he might also have seen the value in copycat killings as a way of throwing off any other blame. The other story you get is that one of the people who had lived at the Cielo house where Sharon Tate was before was a record producer by the name of Terry Melcher. Melcher supposedly, as the general story goes, had welched on a deal with Manson in terms of a record contract. He screwed over Manson in some sort of a record deal, and Manson wanted to get revenge and sent them to kill everybody in the house, which, again, doesn’t make much sense. One, Manson knew that Melcher wasn’t living there anymore. He probably knew where Melcher was living. If he wanted to get Melcher, he could have found him. It wasn’t that difficult to do.

(02:43:57)
And so it’s not revenge on Terry Melcher that drew him there. He was familiar with the house. So if the idea was to simply commit random killings that would muddy the waters around the Hinman killing, then you might pick some place you knew of. He knew the place was [inaudible 02:44:23]. There would be someone there, and you really didn’t care, in the same way that the LaBiancas seem to have been chosen. Manson was familiar with that house because it supposedly had been the scene of creepy crawling. This is one of the little interesting things that the family would be taught to do. Creepy crawling is when you sneak into somebody’s house at night while they’re there asleep, or when they’re not there, and you move things around. So when they get up in the morning or they come home, they’ll suddenly notice that someone has been in their house, which will freak them out, which is the whole point of that.
Lex Fridman
(02:45:02)
But it doesn’t seem like the murder or the creepy crawling was the… Well, creepy crawling maybe. But it doesn’t seem like the murder… For some of the other people you’ve covered, like the Zodiac Killer, the murder is the goal. Maybe there’s some psychopathic artistry to the murder that the Zodiac Killer had, and the messaging behind that. But it seems like, at least the way you’re describing it with the Charles Manson family, the murder was just… They just had a basic disregard for human life, and the murder was a consequence of operating in the drug underworld.
Rick Spence
(02:45:40)
So Manson set up a base at, I think, the Spahn Movie Ranch, which was an old movie ranch out on the northwest edge of LA, and they just camped out there. He used the girls, in particular “Squeaky” Fromme, to get the owner-operator, George Spahn, to let them hang out there. Basically, she slept with him, and he was perfectly happy to let them hang out. They also had a place out in the desert. They dealt in credit card fraud and stolen cars. It was a chop shop that they ran out of the place. So he had a fairly good little criminal gig going, which, with the protection he had, probably would’ve… The one thing they couldn’t cover him on was murder.
Lex Fridman
(02:46:31)
So you think if he was an informer, you think there was still a connection between DEA, FBI, CIA, whatever with him throughout this until he committed murder?
Rick Spence
(02:46:41)
Well, the real question is… There is a book written on this by Tom O’Neill called Chaos. I’m not necessarily saying it’s the easiest thing to get through. There’s a lot of material there. I don’t think O’Neill necessarily knows what to make of some of the stuff he came up with, but he does a very good job of demolishing the whole Bugliosi narrative. One of the people he mentions is a name that I had run into elsewhere, and so I really paid attention to it when I saw it again. And the name is Reeve Whitson. Reeve Whitson shows up on the fringes, even though he has no judicial function. He hangs around Bugliosi and the prosecution. He’s just there. In the same way, he was one of these guys… He grew his hair long, wore bell-bottoms, hung around the music community and elsewhere in Hollywood, but no one could tell you exactly what he did. I know what he did later. A decade later, he shows up as a CIA officer in Central America.

(02:47:51)
So Reeve Whitson, later in his career at least, is CIA. What was he in 1969? What is he doing in this? The other thing about it is he appears to have been the person who called… There’s a little question of when the bodies at Cielo Drive were discovered. So the general story is that Sharon Tate’s housekeeper shows up around 8:30 in the morning, finds the bloody scene and goes screaming next door. But there was another fellow who knew… I think the owner of the house is a photographer, last name may be Hatami. He gets a call earlier in the morning saying that there’d been murders there, and the person he recalls calling him is Reeve Whitson. So someone had been at the house before the bodies were discovered, and they had not called the police. So I don’t know what’s going on there, but it’s a curious situation.

(02:49:07)
And Manson, in a lot of ways, immolates himself. I mean, his behavior at the trial is bizarre. It’s threatening, it’s disruptive. He’s got his girls out on the street carving X’s in their foreheads, carrying knives. One of the attorneys, Ron Hughes, initially Manson’s attorney, becomes Van Houten’s attorney. And he figures out that the three girls, supposedly on Charlie’s insistence, are going to confess. They’ll confess that it was all their idea and Charlie had nothing to do with it. Hughes doesn’t like this, because his defense for her is that she was under Manson’s influence and therefore not responsible for her own actions, that Manson had psychic control over her. So he refuses to go along with it. There’s a break in the trial. He goes camping up in the mountains with some friends, disappears during a rainstorm, and then some months later, his decomposed remains are found.

(02:50:12)
Rumors, always the rumors. What would history be without rumors? Members of the family were angry at Ron Hughes because he messed up Charlie’s plan to get him off, and so they killed him. Maybe they did. Maybe he drowned. That’s absolutely impossible to say. You’ve got that story. There’s a guy named Juan Flynn, who was an employee at the Spahn Ranch, didn’t like Manson, held Manson responsible for the murder of his boss. He would testify that Manson told him that he had ordered all the killings, and that Manson also admitted that he had killed 35 people. Maybe he did. On the other hand, Juan Flynn didn’t like him, and other than his word had no real proof of what he was saying.

(02:51:03)
So please understand me in this case: unlike some people who argue that Charles Manson got a raw deal, I don’t think that’s the case. I think that he exerted tremendous influence over the people there through drugs. Sex was another frequent component in it. He had a real whammy over a lot of these people’s minds. I’m not sure how. That still puzzles me. He was a scrawny guy and he wasn’t physically intimidating. I mean, even a lot of women wouldn’t be physically intimidated by him. But he nevertheless had this real psychological power. And if you look around him, the male followers he had were fairly big guys. So he could get people to do what he wanted. And again, to me, the simplest explanation for this is that it began with the Hinman killing, and, probably on Manson’s instigation, the others were copycat killings to throw off what was going on. If I was a cop, that’s what I would focus on, because that seems to make the most sense.
Lex Fridman
(02:52:19)
It still is fascinating that he’s able to have that much psychological control over those people without having a very clear ideology. So, it’s a cult.
Rick Spence
(02:52:29)
Yes. The great focus on Charlie, the leader. The excessive devotion.
Lex Fridman
(02:52:35)
But there’s not an ideology behind that, like something like Scientology or some kind of religious or some kind of… I don’t know, utopian ideology. Nothing like this?
Rick Spence
(02:52:48)
No. I think that Manson, again, was essentially a criminal. He had a sociopathic mindset, and he hit upon a pretty good deal.
Lex Fridman
(02:52:57)
But how do people convince anybody of anything? With a cult, usually you have either an ideology or you have maybe personal relations, like you said, sex and drugs. But underneath that, can you really keep people with sex and drugs? You have to convince them that you love them in some deep sense. There’s a commune of love.
Rick Spence
(02:53:18)
You have a lot of people there in the cult who come from some sort of, what we like to call, dysfunctional families. A lot of the females in particular seem to have come from more or less middle-class families, but ones full of dysfunction. Their parents didn’t love them. They were semi-runaways. And now they had this whole family. A lot of the younger women had children, some of them by Manson, some of them by the others. They bonded together.

Zodiac Killer

Lex Fridman
(02:53:53)
And again, we return to that pull towards belonging that gets us humans into trouble. So it does seem that there were a few crimes around this time. So, the Zodiac Killer.
Rick Spence
(02:54:13)
Well, California, where I’m from… I remember this period vividly. By the way, the Tate LaBianca killings occurred on my birthday, the year I graduated from high school. So I remember this.
Lex Fridman
(02:54:28)
Happy birthday.
Rick Spence
(02:54:29)
A term which has been used for that… There’s a writer by the name of Todd Wood who’s [inaudible 02:54:34]… I wish I’d come up with this. Killerfornia. Which is a chronicle of these serial killers and disappearances in the late sixties and seventies. So you’ve got the Zodiac, you’ve got other ones. I mean, I hate to say it, I’m not trying to be flippant about it, but young female hitchhikers were disappearing at an alarming rate in Northern California. There are bodies that-
Rick Spence
(02:55:00)
That have never been attributed. Some think that they’re the Zodiac’s victims, but it was a dangerous time. Edmund Kemper, the co-ed killer, was another one. There were a lot of creepy psychopaths running around. I don’t know whether it was something in the water or what was going on, but it was menacing. Hitchhiking, especially if you were alone and female, was not something you wanted to do in much of the Golden State, certainly not up around the Bay Area. So with a lot of these strange killings that were going on, the Zodiac, it’s one of those things where you have these people who have theories about it, and if you don’t share their theory, then you’re part of the problem in some form or another. So I’m not sure, for instance, that the Zodiac killings were all committed by the same person. I think there might’ve been multiple people involved.

(02:56:02)
And the first killings are all of couples. It’s very clear that they… I remember in my examination of it, one of the things I was looking at specifically was, what else is there to say about the Zodiac killings? What I was going to look at is that there are all of these accusations that there was an occult aspect to it, that there was some sort of ritualistic aspect. So I looked at different things: locations, victims, phases of the moon. That’s always worth looking at. I didn’t find much correspondence in any of those. In one of the killings, I think the one at Lake Berryessa, he does appear in this kind of weird hooded costume. He’s got his symbol, that sort of compass or aiming-reticle circle with a cross through it. It can mean a variety of things. He used guns and he used knives, but he certainly had a thing for couples. Except in the last of the killings, which is of a cab driver in downtown San Francisco, who he shoots in full view of witnesses, which is completely atypical.
Lex Fridman
(02:57:12)
And also when he was stabbing the victims, it doesn’t seem like he was very good at it. Or if the goal was to kill them, he wasn’t very good at it because some of them survived.
Rick Spence
(02:57:23)
Yeah, he’s not particularly thorough about it. He seems to have had much more… More of the violence seems to be directed at the females than the males.
Lex Fridman
(02:57:33)
So I mean, there’s a couple of questions to ask here. First of all, did people see his face?
Rick Spence
(02:57:38)
There is a composite drawing of his face, which I think is based upon the Stine killing, the cab driver killing, where there were people who saw him or who claimed that they saw him. The other ones were all when it was fairly dark. I’m not sure that anyone else got a look at his face. The one that occurred in the daylight at Berryessa, he was wearing a mask. So there’s something in common initially in the targeting of victims, which doesn’t hold in the last case. Then after that, there are just these different cases where there’s a pretty good case to be made. A woman who claims, I think, that she and a small child were picked up. Her car broke down, she got a flat tire, and she was picked up by this guy who she got a very strange vibe from, who eventually just let her go. Well, that might’ve been the Zodiac. It might not have been.
Lex Fridman
(02:58:35)
You do this kind of rigorous look, saying, okay, what are the actual facts that we know? Reduce it to the thing that we know for sure. And in speaking about his motivation, he said that he was collecting souls.
Rick Spence
(02:58:53)
Souls for the afterlife.
Lex Fridman
(02:58:55)
For the afterlife.
Rick Spence
(02:58:56)
That’s kind of culty.
Lex Fridman
(02:58:57)
Yeah, I mean, that’s what I believe. Was it the Vikings or the Romans? They believed this about battle.
Rick Spence
(02:59:04)
You’re essentially making sacrificial victims, and they will be your ghostly servants in the afterlife.
Lex Fridman
(02:59:10)
Do you think he actually believed that?
Rick Spence
(02:59:12)
Who knows? I mean, here’s the question. Was he making that up just to be scary, or is that his actual motivation? That’s what he’s saying his motivation is. So let’s take him at face value rather than trying to wish it into the cornfield to get rid of it. Let’s just take it at face value. So he’s claiming that he’s killing these people in order to acquire slave servants in the afterlife. He will subsequently go on to claim many more victims, I’m not sure, 44 eventually, before he just kind of vanishes. One of the really interesting clues to me when I was looking at that case, which I didn’t find anybody else tended to make much of, is that it all has to do with this kind of Halloween card that he sends to the press in San Francisco. And it’s talking about death sort of by rope, by gun, by fire, and there’s this whole sort of wheel, like the zodiac’s. But what this is drawn from, where he got this from, is a Tim Holt Western comic book published in 1951, and you see the same thing on the cover.

(03:00:27)
It’s a Wheel of Fortune, but with different forms of grisly death on it. And all of the things that he mentioned are shown on the cover of this. So whoever put together that card saw that comic book. Well, that’s kind of an interesting clue. So does that mean he’s a comic book collector? When would he have… I mean, that’s where he got the idea from, and so he’s incorporating these things. Then there are of course his codes, which people have deciphered; they aren’t all that difficult, probably because they weren’t meant to be. The other thing that you find often with serial or psychopathic killers is they’re toying with the press. I mean, this goes all the way back to Jack the Ripper. They get attention, and then he just disappears.
Lex Fridman
(03:01:20)
Why do you think he was never caught?
Rick Spence
(03:01:22)
I don’t think they knew who to look for. There was nothing much to go on. There was a guy who was long a suspect, and then eventually they tested his DNA and found it didn’t match any of the things that they’d found. Again, it goes back to, I’m not even sure that it’s one person who’s responsible for all of them.
Lex Fridman
(03:01:44)
So one of the interesting things you bring up here, and our discussion of Manson inspires this, is that there does seem to be a connection, a shared inspiration, between several killers: the Zodiac, the Son of Sam later, and the Monster of Florence. So is it possible there’s some kind of underworld that is connecting these people?
Rick Spence
(03:02:11)
Well, take the Zodiac, and you get his claim that he’s collecting souls for the afterlife. There are other things that are occult-ish connected to that. He may have picked some of the killing sites due to their physical location, their position in a particular place. If you look at the Son of Sam case, of course, David Berkowitz will on and off claim that he was part of a Satanic cult that was carrying out, again, these killings mostly of couples and young women, similar to the Zodiac, and that he had only committed some of them and was a witness to others. And that has really created the whole idea that yes, there is some kind of Satanic cult which engages in ritual murders. Then if you go all the way to Florence, you’ve got murders that go on and off over a long period of time, again focusing on couples in isolated areas, which Italian prosecutors ultimately tried to connect to some kind of satanic cult, although I’m not sure they ever made a particularly strong case for that. But that element comes up in all three of them. So you can, with a little imagination, argue that those similarities, the fact that those things come up in each of those cases in different places, suggest either that, oddly enough, psychopathic criminals all sort of think the same way, or that there is some sort of higher element involved in this, that there’s some kind of common inspiration. Here you come back to something similar to what we were talking about before: do pedophiles exist? Okay, so do satanic cults exist? Well, they do. There was one in my hometown, apparently quite harmless as far as I know; it never did anything. But there are people who wear robes. Here we come again: robes, cut the head off a chicken, naked woman as an altar. You can get off on that, I suppose, if that’s your thing. So professed satanists exist, satanic cults exist, serial killers exist, ritual murders exist. Are those things necessarily connected? No. Could they be connected? Yes.
Don’t ever tell me that something is just too crazy for people to do, because that’s crazy talk.

Illuminati

Lex Fridman
(03:04:58)
You’ve studied secret societies. You’ve given a lot of amazing lectures on secret societies. It’s fascinating to look at human history through the lens of secret societies because they’ve permeated all of human history. You’ve talked about everything from the Knights Templar to the Illuminati to the Freemasons, like we brought up. The Freemasons lasted a long time. The Illuminati, you’ve talked about, in its sort of main form lasted a short time, but its legend…
Rick Spence
(03:05:26)
Never gone away.
Lex Fridman
(03:05:27)
Never gone away. So maybe Illuminati is a really interesting one. What was that?
Rick Spence
(03:05:33)
Well, the Illuminati that we know started in 1776. In fact, you can pin it down to a day: the 1st of May, May Day, 1776, in Ingolstadt, Germany, founded by a professor, Adam Weishaupt. It wasn’t initially called the Illuminati, because that’s not really the name of the organization. It was called the Order of Perfectibilists. Apparently that changed. Weishaupt would say things like, never let our organization be known under its real name anywhere, which leaves one wondering what its real name is. So Illuminati is simply the plural of Illuminatus, which means one who is illuminated, one who has seen the light. So in Roman times, Christian converts were Illuminati because they had seen the light. And there have been other organizations called Illuminati. The term is not trademarked, not copyrighted. Anybody who thinks they’ve seen the light about anything is an Illuminatus. So it defines nothing.

(03:06:44)
The symbol of the order was an owl, which interestingly enough is almost identical to the owl which is the emblem of the Bohemian Club.
Lex Fridman
(03:06:55)
Oh, boy.
Rick Spence
(03:06:56)
Make of that what you will. I don’t make that much out of it, because one owl looks pretty much like another owl to me. But compare them and you’ve got to kind of wonder; there’s a little, just a little thing there. Maybe there’s some kind of connection there. But that supposedly has to do with the connection to the goddess Minerva; the owl was sacred to her, and the Minerval was the person who was brought in at the lowest level of the order. The number of levels changed over time. There was a higher level of the order that people at the lower level didn’t know about, pretty typical for this. But the thing about Weishaupt was that he was a voluminous correspondent with the members of his Illuminati, both during the time that it legally existed in Bavaria and later on.

(03:07:50)
So Weishaupt himself lives, I think, until 1830, and dies in Gotha, which was ruled by an Illuminati prince. And so nothing ever happens to these people. No Illuminatus is ever put to death or imprisoned for any period of time. What happens is that their plan… Well, what was his plan? His plan was essentially to replace all existing religions and governments in the world with a one-world order governed by the Illuminati. So to do this, you had to subvert and destroy all the existing order. And he argued the purpose for this is: we wish to make men happy and free, but first we must make them good.
Lex Fridman
(03:08:37)
Oh, right.
Rick Spence
(03:08:39)
So that’s what the order is all about. Of course, he also said things like, oh man, is there nothing that you won’t believe? So myth would be used in that. He also thought women should be brought into it. He had a rather interesting view about that, which was that we should appeal to women in part because women have a chip on their shoulder because they’re left out of things. So we should appeal to their vanity on that point and offer that in the future all things will be open and they will be emancipated. So we should hold out the prospect of female emancipation to attract them, because, he argued, in the short term there’s no better way to influence men than through women. Get women on our side by promising them emancipation, but make sure we’ll never actually deliver it to them, because the future world will be a boys’ club.

(03:09:29)
So he talks about these things fairly openly, and this is where you get this idea of some sort of a new world order, which is to be based upon the destruction of the existing order. So there are those who argue that there is a trail of descent that leads from Weishaupt’s Illuminati to the Communist Manifesto, and in fact to communism itself, that Marxism was simply a further restating of this idea. I mean, the idea never entirely goes away. The Bavarian government gets a hold of the order’s inner texts. The story is they’re delivered to them. I think that Weishaupt gave them to them. I think he engineered the exposure of his order because it gave him publicity. By being exposed in Bavaria, you gained great renown. And they continued to recruit after this, and the Bavarian government actually bans the Illuminati four different times. Why? Because apparently the first three times didn’t work. So the fourth one does? You can notice that it’s like Papal bans on Freemasonry. They just go on and on and on, because this clearly isn’t working.
Lex Fridman
(03:10:52)
And you actually highlight, speaking of publicity, that there’s a difference between visibility and transparency. A secret society could be visible, it could be known about, it could be quite popular, but you could still have secrecy within it.
Rick Spence
(03:11:08)
You have no idea what’s going on inside. It’s like a black box. If I set a black box on this table, we can see that there is a black box. What’s in the black box? A cat? Who knows?
Lex Fridman
(03:11:18)
In fact, the secrecy might be the very thing that makes it even more popular.
Rick Spence
(03:11:21)
Adam Weishaupt, again: there is nothing more convincing than a concealed mystery. Give people a concealed mystery. So we need to make the order mysterious for that exact reason. Always hold out the possibility of special knowledge that no mere mortals other than you will have. So he sensed a lot of things. The use of vanity and ego to recruit people, to influence both men and women, it’s quite sophisticated, as you might expect from a professor of canon law trained by Jesuits. So I certainly don’t think that it ceased when it was banned in Bavaria, because everybody just scatters and goes elsewhere, like Paris. And then you have the French Revolution.

Secret societies

Lex Fridman
(03:12:21)
So the idea of the Illuminati, to put it crudely, the branding, is a really powerful one. And so it makes sense that there’s a thread connecting it to this day, that a lot of organizations, a lot of secret societies, can adopt the branding.
Rick Spence
(03:12:39)
Anybody can use it. You can go out and form a club and call it the Illuminati.
Lex Fridman
(03:12:43)
And if you are effective at it, I think it does attract. It’s the chicken or the egg. But powerful people tend to have gigantic egos, and people with gigantic egos tend to like the exclusivity of secret societies. And so there’s a gravitational force that pulls powerful people to these societies. It’s exclusive.
Rick Spence
(03:13:05)
Only certain people. And you also notice something that goes back to much earlier, when we were talking about intelligence. Remember MEIS? Ego.
Lex Fridman
(03:13:12)
Ego, yeah.
Rick Spence
(03:13:12)
Ease of recruitment and control. That’s a great Achilles heel in human beings, the exploitation of ego.
Lex Fridman
(03:13:21)
And of course, if we go back to the conversation of intelligence agencies, it would be very efficient and beneficial for intelligence agencies to infiltrate the secret societies because that’s where the powerful people are.
Rick Spence
(03:13:36)
Or the secret societies to infiltrate the intelligence agencies.
Lex Fridman
(03:13:39)
Oh boy. Well, I mean that’s actually in all the lectures, I kind of had a sense that intelligence agencies themselves are kind of secret societies, right?
Rick Spence
(03:13:53)
Well, I’ll give you my definition of secret societies, what they come down to. One is that generally their existence isn’t secret; it’s what they do that’s secret. It’s what’s in the box as opposed to the existence of the box. So one of the most important criteria is that they are self-selecting. You just don’t join. They pick you. They decide whether or not they admit you. And oftentimes they will sort of recruit you. Once you have been recruited, you have to pass tests and initiations, and you also have to swear oaths of loyalty. Those are always very, very critical. So broadly speaking, entrance into an intelligence organization works the same way: they decide whether you get in. You just don’t automatically get the job. You have to pass tests, a lie detector test, for instance, field training tests, a whole variety of tests. And then you’re sworn to secrecy. You never talk about what you do, ever. Or there will be dire consequences.

(03:15:05)
So the method is very much the same. And also this idea of creating a kind of insular group. The organization is us, and everyone else is outside of that. We are guardians of special knowledge. See, this is the type of thing that would generally happen if you question whatever any kind of intelligence agency did. Well, we know things that you don’t. Why? Because we’re the organization that knows things. We collect information, we know the secrets, we guard the secrets. Therefore, if we tell you, you must believe us.
Lex Fridman
(03:15:45)
I have this sense that there are very powerful secret societies operating today, and we don’t really know or understand them. And the conspiracy theories in spirit might have something to them but are actually factually not correct. So an effective, powerful secret society or intelligence agency is not going to let you know anything that it doesn’t want you to know, right?
Rick Spence
(03:16:13)
They’ll probably mislead you if you get too close. So I think the question is, what’s the most powerful or important secret society? Probably the one you don’t know about, the one that doesn’t advertise its existence, the one which is never known anywhere under its real name. You’ve got things like the Bohemian Club. You’ve got the Bilderbergers, another one, formed in the 1950s, largely the creation of a guy by the name of Josef Retinger: Polish, mysterious, appears out of who knows where, a schemer for years, a man expelled from Britain, France, and the United States at one point or another, long active in the Mexican labor movement. Retinger is a mysterious figure. In fact, I think there was even a book written about him called Eminence Grise, Grey Eminence. The fellow who was the front man for the Bilderbergers was Prince Bernhard of the Netherlands, who was at one point a Nazi and then a Dutch freedom fighter.

(03:17:21)
All right, take your pick. But Retinger is the moving hand behind the whole thing, and I’ll be damned if I can figure out who Retinger is. So the idea is that, well, you get influential people in media, business, and politics, and you bring them together just to talk, to try to find common answers or common questions. It’s all very much sort of Western Anglo-European. It’s all very closely connected to NATO, the whole concept of a kind of Atlanticist world, which is essentially the Anglo-American combine combined with Western Europe. But you’ve got a bunch of these things. I mean, the Council on Foreign Relations is very similar to that and the Bilderbergers, and there’s an overlap with the Bohemian Club. And then you’ve got the Pinay Cercle, or Le Cercle, which is more military, but also linked to the so-called secret Gladio network. The idea was that if the Soviets overran Western Europe, there would be a stay-behind organization called Gladio. There’d be these freedom fighters.

(03:18:43)
So the question I have about that is, how many secret organizations do you need? I mean, why all these separate groups, which often seem to have the same people in them?
Lex Fridman
(03:18:53)
Yeah. The closer I look, the more I wonder the same question we asked about the Russian intelligence agencies: where’s the center of power? It seems to be very hard to figure out. Does the secrecy scare you?
Rick Spence
(03:19:07)
Well, I guess on one level I’m comforted that there’s somebody actually making decisions as opposed to, I mean, what do you want? Do you want chaos or do you want everything kind of rigidly controlled? And I don’t put much stock in the idea that there actually is some small group of people running everything, because if they were, it would operate more efficiently. I do think that there are various disparate groups of people who think that they’re running things or try to, and that’s what concerns me more than anything else.

(03:19:51)
Well, I hate to go back to them again, but what you’re bringing up goes back to the Nazis. They had their whole idea about a new world order, and they only had 12 years to do it. And look what a mess they made. I mean, look at the damage, the physical damage, that can be done by an idea inspiring a relatively small group of people controlling a nation, based upon some sort of racial or ideological fantasy that has no real basis in reality and yet guides their actions. It’s this differentiation that I always make, and that I would try to get across to students: always be clear about what you know and what you believe. You don’t know many things.

(03:20:40)
You know your name, you know when you were born, you probably know who your father is, but that’s not absolute unless you’ve had a DNA test, and only if you trust DNA tests. So you know who your mother is. You believe this man is your father. Why? Because your mother told you he was. So you believe things generally because someone has told you this is true, but you don’t really know for sure.

(03:21:09)
Well, because we know so little, we tend to go by beliefs. So we believe in this. We believe in that. You believe that your cult leader is the answer to everything. And it seems to be very, very easy to get people to believe things. And then what happens is that whether or not those beliefs have any real basis in reality, they begin to influence your actions. So here again, regrettably in some ways to bring it back to the Nazis, what were the Nazis convinced of? They were convinced that Jews were basically evil aliens. That’s what it comes down to. They weren’t really humans. There’s some sort of evil contamination which we must eradicate. And they set out to do that.
Lex Fridman
(03:21:59)
And they were sure that there are just a few problems that can be solved, and once you solve them, you have this beautiful utopia where everything will be just perfect, it’ll be great, and we can just get there. And I think it’s a really strong belief in a global utopia. It just never goes right. It seems like it’s impossible to know the truth in it.
Rick Spence
(03:22:21)
For some reason, not long ago, I was listening on YouTube to old Wobbly songs, the Industrial Workers of the World. I don’t know why. I know there was a whole album of Wobbly songs, and there was one of them called Commonwealth of Toil. And like most of them, it’s sort of taken from gospel songs. And it’s talking about how wonderful everything will be in the future, in the Commonwealth of Toil that will be. And now these are revolutionary leftists, in this case, Wobblies. But nonetheless, it’s like a prayer for communism. In the future, everything will be good because the earth will be shared by the toilers, and from each according to his abilities, and to each according to his need. And it’s this kind of sweet little song in some way. But I’m just sort of imagining this. If I was going to stage it, I’d have a choir of children singing it with a huge hammer and sickle behind them, because that’s what it’s combining. And you can think about the sentiments expressed in that song, which are legitimate in some way, and all the horrors that they then led to.
Lex Fridman
(03:23:52)
It is fascinating about humans. A beautiful idea on paper, an innocent little idea about a utopian future, can lead to so much suffering and so much destruction, and the unintended consequences you’ve described.
Rick Spence
(03:24:08)
The law of unintended consequences.
Lex Fridman
(03:24:10)
And we learn from it. I mean, that’s why history is important. We learn from it hopefully.
Rick Spence
(03:24:13)
Do we?
Lex Fridman
(03:24:15)
Slowly. Or we’re slow learners.
Rick Spence
(03:24:19)
I’m unconvinced of that, but perhaps.
Lex Fridman
(03:24:22)
Speaking of unconvinced, what gives you hope? If human beings are still here, maybe expanding out into the cosmos 1,000, 5,000, 10,000 years from now, what gives you hope about that future, about it even being a possible future, about it happening?
Rick Spence
(03:24:44)
Most people are cooperative and kind most of the time. And that’s one of those things that can usually be depended upon. And usually you’ll get back what you put into it. Another thing that I have a weird fascination with watching is people who have meltdowns on airplanes, because it’s just bizarre.
Lex Fridman
(03:25:20)
That’s fascinating to watch.
Rick Spence
(03:25:21)
The people who will… there’s some sort of psychotic break that occurs, and it’s always going to end the same way. The cops are going to come on and drag you off the plane. And true, you’re going to inconvenience everybody there, and usually at some point they don’t care about that. That’s the one little sense of power that they have. So they have some sort of sense of powerlessness. And if their only way of power is just to piss off everybody else on that plane, they’re going to go ahead and do it even though it’s going to lead nowhere for them.
Lex Fridman
(03:25:56)
And there’s similar sometimes psychological behavior in traffic.
Rick Spence
(03:26:00)
Well, the road rage thing.
Lex Fridman
(03:26:01)
The road rage, yeah. It’s fascinating.
Rick Spence
(03:26:03)
And I bet that most of those, there again, are people who up to some point were cooperative and kind and polite, and then they snap. So those are all part of the human makeup as well.
Lex Fridman
(03:26:17)
But also part of the human makeup, the difference between humans and chimps, is the ability to get together and cooperate on a mass scale over an idea, to create things like the Roman Empire did: laws that prevent and protect us from crazy human behavior, from manifestations of a Manson type of human.
Rick Spence
(03:26:39)
Well, human beings are just weird animals all year round. It’s just completely peculiar. I’m not sure that we’re altogether natural.
Lex Fridman
(03:26:46)
But I think we are altogether beautiful. There is something magical about humans, and I hope humans stay here even as we get advanced robots walking around everywhere, more and more intelligent robots that claim to have consciousness, that claim they love you, that increasingly take over our world. I hope this magical thing that makes us human still persists.
Rick Spence
(03:27:11)
Well, let us hope so.
Lex Fridman
(03:27:13)
Rick, you’re an incredible person. You have so much fascinating work, and it’s really an awesome…
Rick Spence
(03:27:20)
I’ve never had anybody ask me as many interesting questions as you have.
Lex Fridman
(03:27:24)
Thank you so much.
Rick Spence
(03:27:25)
Or as many questions.
Lex Fridman
(03:27:27)
This was so fun. Thank you so much for talking today.
Rick Spence
(03:27:29)
Well, thank you.
Lex Fridman
(03:27:31)
Thanks for listening to this conversation with Rick Spence. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from John F. Kennedy. “The very word secrecy is repugnant in a free and open society. And we are as a people, inherently and historically opposed to secret societies, to secret oaths, and to secret proceedings. We decided long ago that the dangers of excessive and unwarranted concealment of pertinent facts far outweighed the dangers which are cited to justify it.”

(03:28:07)
Thank you for listening and hope to see you next time.

Transcript for Bernie Sanders Interview | Lex Fridman Podcast #450

This is a transcript of Lex Fridman Podcast #450 with Bernie Sanders.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Bernie Sanders
(00:00:00)
The ideas that I am talking about are ideas that are widely supported. Everything that I talk about, raising the minimum wage, health care for all, a tax system which demands the billionaires pay their fair share, those are all popular ideas, but people didn’t know. You’ve got to run for president and have 20,000 people come out to your rallies and win 23 states. Then they say, “Hmm. Well, maybe those ideas are not so crazy after all, and we’ve got to entertain them.” The establishment doesn’t like that. They really don’t. They want to tell you, and this is their main… This is how they succeed. What they say, Lex, is, “The world is the way it is. It always will be this way. We got the wealth. We got the power. And don’t think of anything else. This is the way it is. You have no power. Give up.” They don’t say it quite that way, but that’s really what the intent is.

(00:00:50)
And what we showed is, guess what? Running an outsider campaign, we took on the Democratic establishment, we came close to winning it, and we did win 23 states. And the ideas that we’re talking about are the ideas that working class people, young people believe in.
Lex Fridman
(00:01:10)
The following is a conversation with Bernie Sanders, senator from Vermont and two-time presidential candidate, both times as the underdog who, against the long odds, captivated the support and excitement of millions of people both on the left and the right. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Bernie Sanders.

MLK Jr

Lex Fridman
(00:01:40)
Growing up, did you ever think you’d be a politician?
Bernie Sanders
(00:01:43)
Nope. Not in a million years.
Lex Fridman
(00:01:47)
Yeah. I know that you hate talking about yourself, which is rare for a politician, I would say. What’s your philosophy behind that? You like talking about the issues. You like talking about-
Bernie Sanders
(00:01:55)
Yeah, I do. Everybody talks about themselves. It’s not about me. Nice guy, not a nice guy. What politics should be about is the issues facing the people of our country, the people of the world, and how we’re going to address them. That’s what it should be.
Lex Fridman
(00:02:10)
That said, there are interesting aspects to your life story. For example, in 1963, you were very active in the Civil Rights Movement, you even got arrested for protesting segregation in Chicago, and you attended the famous March on Washington where MLK gave his I Have a Dream speech. What was that like?
Bernie Sanders
(00:02:30)
It was extraordinary. Took a bus ride down with fellow students from the University of Chicago, and there were a zillion people there. I’m not sure if it was the first time I’d ever been in Washington in my life, but it was a very impressive moment. And what he was talking about, people very often forget about that, it was not only racial justice, it was jobs. Jobs and justice, that was the name of that rally. And so it’s something I’ve never forgotten.
Lex Fridman
(00:02:59)
What influence did he have on you? What’d you learn about the way he enacted change in the world?
Bernie Sanders
(00:03:06)
King was a very impressive guy, more impressive, I think, than people think that he was. And what he did is he created his movement from the bottom on up. So he developed real organization, grassroots organization, which put pressure on communities and officials to end segregation, to open up voting. And I think what has to also be remembered about King, which is really quite extraordinary, is he won the Nobel Peace Prize. And there was, oh, you’re great, you’re wonderful. But then, toward the end of his life, he took on Lyndon Johnson on the war in Vietnam. And as soon as he did that, suddenly the editorial pages throughout America, the establishment papers, no longer thought he was so great. In fact, the message sent out was, “You’re black. Deal with civil rights. Don’t worry about foreign policy. We’ll take care of that.” But he said, “If I talk about peace and nonviolence, I can’t sit back and allow what’s going on in Vietnam to continue without speaking out.”

(00:04:12)
Incredible courage to do that. And by the way, when he was assassinated, he was fighting for the rights of AFSCME workers, the guys that delivered the garbage, who were treated terribly, low wages, bad working conditions. He went out to support their right to form a union. That’s when he got killed.

Corruption in politics

Lex Fridman
(00:04:33)
So on the war front, one of the things that people don’t often talk about is your work in politics. You gave what I think is a truly brave speech on the Iraq War in 2002, I believe. You voted no on the Iraq Resolution, you voted no on the Patriot Act, and you basically predicted very accurately what would happen if we went into Iraq. What was your thinking at the time behind those speeches, behind voting no on the Patriot Act and on the Iraq Resolution?
Bernie Sanders
(00:05:09)
It maybe ironically came out of the war in Vietnam and the ease with which people told lies. We went into Vietnam under a lie. We lost close to 60,000 Americans. Millions of people in Vietnam and Cambodia died as a result of that. So I think twice about it. And then the war in Iraq, you had people like Dick Cheney and others telling us, “Oh, they have nuclear weapons and all that stuff. It’s the only way we can resolve the issue.” I didn’t believe it. I didn’t agree with it. And you’re right, it turns out, historically, I was right.
Lex Fridman
(00:05:46)
What’s the way to fight this thing that Martin Luther King tried to fight, which is the military industrial complex?
Bernie Sanders
(00:05:54)
It’s huge. It gets to the broader issue of where we are as a nation. And what I almost uniquely in Congress talk about is the fact that we are moving, Lex, to an oligarchic form of society. And not a lot of people are familiar with that term, but what it means… We talk about oligarchy in Russia. Oh, Putin is surrounded by the oligarchs. Well, guess what? What do you think is happening in the United States? So what you have right now is an economy with more concentration of ownership than we’ve ever had. All right? That means whether it’s agriculture, transportation, healthcare, whatever it may be, fewer and fewer massively large corporations control what’s produced and the prices we pay. And then you look at our political system, and we don’t talk about it. What is the reality of the political system today? And that is that billionaires are spending huge amounts of money to buy this election. In Trump’s campaign, you got three multi-billionaires spending over $200 million, three people. Democrats have their billionaires. It’s not quite as concentrated.

(00:06:55)
But at the end of the day, billionaires play an enormous role in terms of electing politicians and in Washington in determining what legislation gets seen and not seen.
Lex Fridman
(00:07:07)
But it’s not just single billionaires. It’s companies with lobbyists.
Bernie Sanders
(00:07:10)
You got it. Let me give you one example, lobbyists. We pay, in the United States, by far the highest prices in the world for prescription drugs. This is an issue I’ve been working hard on, with some success. Take a wild and crazy guess: how many lobbyists are there from the drug companies in Washington, D.C.?
Lex Fridman
(00:07:28)
Over a thousand.
Bernie Sanders
(00:07:29)
Over a thousand. There are 100 members of the Senate, 435 members of the House, 535 members of Congress. There are 1,800 well-paid lobbyists representing the drug companies, including former leaders of the Republican and Democratic parties. That is one of the reasons why we pay the highest prices in the world for prescription drugs. Military-industrial complex: you’ve got a revolving door. People go from the military into General Dynamics, into Lockheed Martin, and the other large companies, and what we see there is an institution, the Pentagon, that we spend a trillion dollars a year on. It is the only federal agency that is not able to pass an independent audit. So if you think there’s not massive fraud and waste and cost overruns in the Pentagon, you would be sorely mistaken.
Lex Fridman
(00:08:22)
Do you think most politicians are corrupt in accepting the money, or is the system corrupt? Or is it a bit of both?
Bernie Sanders
(00:08:30)
If corrupt means that, “Hey, here’s $10,000, vote this way,” it doesn’t work like that. Very, very rare. Occasionally. Very, very rare. That’s corruption. What happens is that if you are in a campaign… And right now, the amount of money that people have to raise, if you’re running for Senate in Ohio, you’re talking about 50, $60 million. Where the hell are you going to get that money? It’s not going to be $10 donations. You’re going to be surrounding yourself with people who have the money. You’re going to go to $5,000 [inaudible 00:09:02], etc. So you surround yourself with those people who say, “Oh, these are my problems. This is what I need, and this is… I need a tax break for billionaires,” blah, blah, blah, blah. So you live in that world. They are your financial support.

(00:09:15)
They are, in a sense, your political base, so you’re very cognizant of what you do in terms of not upsetting them. So it’s not corruption in the sense of people taking envelopes with huge amounts of money to vote a certain way. That very, very rarely, if ever, happens. It is the power of big money to make politicians dependent on those folks. And that’s why when I ran for president, what I probably may be most proud of is the fact that we received millions and millions of campaign contributions averaging 27 bucks apiece, I think, in 2016.
Lex Fridman
(00:09:52)
Have companies, lobbyists ever tried to buy you, tried to influence you?
Bernie Sanders
(00:09:56)
We don’t welcome them into our office. I do deal with these guys, but it’s usually in a confrontational tone. No, so they don’t come into my office very often telling me their problems.
Lex Fridman
(00:10:04)
So how do we fix the system? How do we get money out of politics?
Bernie Sanders
(00:10:09)
Like many other issues, we don’t have to reinvent the wheel here. It exists in other countries. If you go to… Every country has its own election system, but nobody has a system where billionaires can spend unlimited sums of money through super PACs to elect the candidates of their choice. So the first thing you got to do… One of the things, Lex, I found is that the more important the issue, the less discussion there is. The less important the issue, the more discussion there is. A number of years ago, the United States Supreme Court, in one of its more pathetic decisions, handed down the Citizens United decision. What the Citizens United decision said is: you’re a multi-billionaire. You want the freedom. You’re a free person in a free country. You want the freedom to buy the government, and how terrible it would be to deny you the freedom to spend hundreds of millions of dollars on a campaign to elect the candidates. And they said that’s your freedom, and that’s what Citizens United is about.

(00:11:10)
We’ve got to end that. And in my view, we move to public funding of elections. That means if you want to run for governor, you want to run for Senate, show that you have some support, get $5 contributions from X number of people to show you’re not a flake. You have some support and the government will pay a certain amount more, and there’ll be a limit on the amount of money that can be spent. So it’ll be a real… You can run against me and I’m not going to outspend you 10 to one. That’s what we should be moving toward, in my view.
Lex Fridman
(00:11:39)
How do we make that happen when there’s so much money in the system and the politicians are beholden to the people who paid for their election? Does it have to come from the very top, essentially from a really strong, popular populist president?
Bernie Sanders
(00:11:54)
But you’re right. You raised exactly the question. If I’m getting a huge amount of money from billionaires, do you think I’m going to go out and announce, “I think billionaires should not be involved in buying elections”? I doubt that very much. So what you’re going to need, and you tell me if I’m missing something, but I pay attention, you don’t hear either of the major candidates talking about that issue, do you?
Lex Fridman
(00:12:16)
I think what happens is when an individual politician speaks out about it, they get punished. But I think this is a popular idea, so if a lot of them speak out… That’s why if it came from the top, if a president was using a very large platform to speak out, it would provide a safety blanket for the other politicians to get it out of the system. But there has to be kind of a mass movement behind it.
Bernie Sanders
(00:12:40)
Yes, it does. And every place I go, I always speak about the issue, and it always… People understand it. You’re a Republican, you’re a Democrat, you’re progressive, you’re conservative, who really believes that we are a democracy when billionaires can spend tens and tens of millions of dollars to buy elections? So it is a very popular issue. It’s important. You’re right. We need political leaders to be speaking out on that, but we need a grassroots movement to say, when somebody is at a town meeting, you’re running for the Senate, you’re running for the House, what’s your view on Citizens United? Are you prepared to vote to overturn that decision and move to public funding of elections? Extraordinarily important.
Lex Fridman
(00:13:18)
So many of your policy proposals are quite radical.
Bernie Sanders
(00:13:23)
No, they’re not. I beg to differ.
Lex Fridman
(00:13:25)
Okay, great.
Bernie Sanders
(00:13:26)
[inaudible 00:13:26]
Lex Fridman
(00:13:27)
Well, they’re popular. So what I mean is relative to the way other politicians speak, which is usually a little bit more moderate. So from everything you’ve learned from politics, is it better to go sort of radical, maybe we can come up with a different word, versus more moderate, safe, ambiguous kinds of policies?
Bernie Sanders
(00:13:45)
Okay, let’s talk about it. Fair enough. We talked about one issue, very important, money in politics.
Lex Fridman
(00:13:50)
Money, yes.
Bernie Sanders
(00:13:51)
Getting big money out of politics, do you think that’s a radical idea?
Lex Fridman
(00:13:55)
Well, yeah. It’s a popular idea. It’s an idea that makes sense. But in order to implement it and actually make it happen, it requires flipping the system upside down, right? In that sense, it’s radical.
Bernie Sanders
(00:14:09)
In that sense, it’s radical. But if you go walk down the street here and you say, “Do you think billionaires should be able to spend as much money as they want to buy politicians?” I would say nine out of 10 people will say, “That’s crazy. That’s not what America’s supposed to be about.” So in that sense, it’s certainly not radical. Let’s talk about healthcare. Go out on the street, do it, or do a poll, and I’ve done the polling, is healthcare a human right? Should every American be able to go to a doctor when they need, regardless of their income? Do you know what people say? I would say about 85, 90% of the people say, “Of course.” The idea that healthcare is a human right available to all exists, Lex, in every major country on earth except the United States. So you’re here with me in Burlington, Vermont, right?

(00:14:55)
If you’ve got a car, go 50 miles north to Canada, walk into Canada and ask people, “When you go to the hospital, how much does it cost you? What kind of bill do you get?” And they say, “What are you talking about? It doesn’t cost us anything. It doesn’t cost us a nickel.” That’s the case in virtually every country in Europe. So the idea that healthcare should be available to all, or that there should be no out-of-pocket expense because it’s a human right, is widespread around the world and very much agreed to in this country. The bottom line is that because of our corrupt political system, we have a healthcare system designed not to provide healthcare to all people, but to make huge profits for the drug companies and the insurance companies. And that is what’s happening, and we got to change that system. So I’m a strong advocate, and I’ve led the effort, on Medicare for all.

Healthcare in US

Lex Fridman
(00:15:50)
Okay, let’s talk about Medicare for all. If you could snap your fingers today and implement the best possible healthcare system for the United States of America, what would that look like?
Bernie Sanders
(00:16:01)
Well, we have a pretty good system, not great, but a pretty good system in Medicare. So it’s there for the elderly. Lyndon Johnson passed that in the 1960s, a huge step forward. It is being chipped away at by the private insurance companies through Medicare Advantage. But if you strengthen Medicare and you do away with the kind of deductibles that seniors now have to pay, and you do away with other stuff, and you say, basically, right now you’re a senior in America, go to any doctor you want, and when you’re in the hospital, Medicare will pay the entire bill. You expand Medicare to cover dental, hearing, and vision, which it doesn’t now cover. You do all of those things, and then the next thing you do is say, okay, to be eligible for Medicare, now you have to be 65. The first year, we’re going to lower it to 55, then we’ll lower it to 45, then we’ll lower it to 35.

(00:16:52)
Then we’ll have everybody in the system. So I think in a four or five year period you can strengthen Medicare and have everybody in the system. And when you do that, and this is not just me talking, a number of studies have pointed this out, when you take the profit motive out of it, from the insurance companies and the drug companies, you can end up providing quality care to all people at no more than we’re spending right now. Because right now we are spending twice as much per person on healthcare as the people of any other nation. Incredibly wasteful system.
Lex Fridman
(00:17:23)
So the way to pay for the system is to increase taxes. But you’re saying if you cut that cost and increase the taxes, it’s going to-
Bernie Sanders
(00:17:31)
Here’s the story, and I’ve gotten my share of 30 second ads attacking me on this. Bernie Sanders wants to raise your taxes on healthcare. It’s true, in a progressive way. But right now, do you have health insurance?
Lex Fridman
(00:17:46)
Yes.
Bernie Sanders
(00:17:46)
Okay. Somebody’s paying for your health insurance. It depends: if you are working, most people get their health insurance through their jobs, okay? So if you’re working for a large company, your employer is paying your health insurance, and by the way, that comes out of your wages. Healthcare costs in America are very high. And your employer will tell you, honestly, look, I can’t give you more than a 3% wage increase because I got a 10% increase in your healthcare costs. You want that? Or if you’re a union negotiating, you know what they’ll say: Hey, you want decent wages? We’re going to have to cut back on your healthcare. That’s what every union has to deal with every negotiating session. So we’re paying for it through employers, out of pocket. We pay for it through Medicare, Medicaid, the Veterans Administration, et cetera. What I am proposing is really not radical. It’s what exists in Canada and other countries.

(00:18:34)
It is publicly funded, like the police departments and libraries are, like public education. This is publicly funded in a progressive way. So right now, rather than paying out of your own pocket, if you are a family, let’s just say you’re self-employed right now and you have a couple of kids and a wife, it could cost you 15, $20,000 a year in insurance costs. Well, that’s all eliminated. Will you have to pay more in taxes? Of course you will. It depends on your income level, but it could be that you’d be paying $12,000 more in taxes, but not the $20,000 more in premiums, co-payments, and deductibles. You save money. So it’s paying taxes rather than paying money to the insurance company. You get a better deal through the tax system.
Lex Fridman
(00:19:23)
So the most painful thing in today’s system is the surprise bills, the number one cause of bankruptcy, and the psychological pain that comes from that, just the worry, stress, and debt.
Bernie Sanders
(00:19:36)
You got it.
Lex Fridman
(00:19:36)
And just basically being afraid constantly of getting sick because you don’t know if insurance is going to cover it. And if you’re not insured, you don’t know how much it’s going to cost. So you’re not going to go to the hospital even if there’s something wrong with you, if there’s pain and all that. So you just live in a state of fear, psychological fear. That’s the number one problem. It’s not just financial, it’s psychological-
Bernie Sanders
(00:19:55)
You got it. Look, and I think you said it very well. I’m chairman of the committee that deals with this stuff. So I talk to a lot of doctors. And doctors in Vermont and all over this country tell me that they’re astounded that people walk into their offices much sicker than they should have been. And the doctor says, why didn’t you come here six months ago when you first felt your symptoms? And they say, well, I have a high deductible. I have a $10,000 deductible. I don’t have any money to pay. I’m uninsured. Some of those people don’t make it. Other people, and this is what is totally crazy, end up in the hospital at huge expense to the system rather than getting the care they need when they needed it. So that is how… I’ll give you another example of it. We pay the highest prices in the world for prescription drugs.

(00:20:46)
One out of four Americans can’t afford the drugs their doctors prescribe. So you walk into the doctor’s office, they say, okay, look, you got this, that, and the other thing. Here’s a prescription. You can’t afford to fill it. What happens? You get sicker. You end up in the emergency room, which is an extremely expensive proposition. Or you end up in the hospital, rather than dealing with the problem when it occurs. And what is not talked about… I mentioned earlier how we don’t talk about some of the major issues. The estimate is that some 60,000 people in America die every single year, unnecessarily, because they can’t get to a doctor when they need one, for financial reasons. And you want to hear even crazier? One out of four people who get cancer treatment in this country either go bankrupt or deplete the financial resources of their family.

(00:21:33)
So your point is right. If somebody diagnoses you with cancer, you’re scared to death. You’re worried about whether you’re going to live or die, what’s going to happen? And then on top of that, you got to worry about whether your family goes bankrupt. How insane and cruel is that? So to me, I think healthcare is what unites us all. Everybody has family. They get sick, we all get born, we all die, we all want care. And we have all got to come together to create a system that works for all of us, not just the drug companies or the insurance companies.
Lex Fridman
(00:22:03)
There are just so many stories, and not even the horrific stories. There are countless horrific stories, but just basic stories of cost. Like my friend Dr. Peter Attia has this story where, he happens to be wealthy so he can afford it, but he had to take his son to the emergency room and the son was dehydrated, and the bill was $6,000. They just did a basic test and gave him an IV, a basic thing. And he has really good insurance, and the insurance covered $4,000 of it. So in the end he paid $2,000 for a basic emergency room visit. And there are a lot of families for whom that one visit, for such a simple thing, would be just financially devastating.
Bernie Sanders
(00:22:43)
And you know what? People know that, and you know what they say? I don’t feel well today. Something’s wrong. I ain’t going to go to that emergency room because I don’t want a $6,000 bill. And what happens? He had insurance that paid two thirds of it, right?
Lex Fridman
(00:22:57)
Yes.
Bernie Sanders
(00:22:58)
So what happens if he didn’t? What happens if he didn’t have money? He’d be hounded by bill collectors for the rest of his life. So it is a disgusting system. It is an inhumane system. But the insurance companies and the drug companies are very powerful, and they make a lot of campaign contributions and have a lot of lobbyists, and that’s why we are where we are. But I think the American people want fundamental changes there.
Lex Fridman
(00:23:21)
So that’s another good example of a really popular idea that is not implemented because of the money in politics.
Bernie Sanders
(00:23:29)
You got it. That’s wonderful. And I’ll tell you, not only is it not implemented because of money, it’s not even discussed. All right? What I’m saying here no one disputes: we are spending twice as much per person on healthcare, right? And yet 85 million Americans are uninsured or underinsured, and our life expectancy is lower than virtually every other major country’s. Do you think that might be an issue that we’d be discussing?

2016 election

Lex Fridman
(00:23:59)
Again, if a single politician discusses it, they get punished for it. So there needs to be a mass movement, and probably, I mean from my perspective, it has to come from the very top. It has to come from the president. And the president has to be a populist president who doesn’t care about the parties or the rich people. They just speak out because they know it’s a popular message and they know it’s the right thing. So speaking of that, you had a historic campaign run for president in 2016, and in the eyes of many people, mine included, you were screwed over by the DNC, especially as the WikiLeaks emails showed. Looking back, what are your feelings about that? Are you angry, upset?
Bernie Sanders
(00:24:47)
Yeah, of course I’m angry and of course I’m upset. But when you take on, in this case, the Democratic establishment who have controlled that party forever, the moneyed interests of the Democratic Party, you’re taking on corporate America, you’re taking on the corporate media. And when you’re calling for a political revolution that creates a government that works for all and not just the few, the opposition is going to be extraordinary. But what I am extremely proud of from that campaign, and 2020 as well, is that we took on the anointed candidate of the establishment and we showed, despite the fact that the entire establishment was against us. In the Senate, there were 50 Democrats and I had one supporter. I had no governor supporting me. I think maybe a few people in the House.

(00:25:46)
But we took on the whole political establishment and we did… We got millions of votes. And the ideas that we brought forth were ideas that they had to eventually deal with in one way or another. And if you look at the American Rescue Plan, which I’m proud to have helped write during the midst of COVID, a lot of the ideas that we fought for were implemented in that bill. And I want to make them, obviously, permanent.
Lex Fridman
(00:26:12)
And you almost won. And a lot of people thought that you would win against Donald Trump.
Bernie Sanders
(00:26:17)
I think we would’ve. Trump is a very… I think he’s a little bit crazy, between you and me, but he is a smart politician. And he’s appealing to a lot of the anger that working class people feel. And you know what? Working class people should feel angry, but they should make sure that their anger is directed in the right direction and not against people who are even worse off than they are, which is what demagogues like Trump always do. So I think we had, as I went around the country then, and now we have, a lot of support from working class people who understand that there is something wrong.

(00:26:57)
And this is an incredible fact that no one talks about. All right, I’m going to ask you a question. Are you ready for this Lex?
Lex Fridman
(00:27:02)
Let’s go.
Bernie Sanders
(00:27:03)
Here we go. Over the last 50 years, there’s been a massive increase in worker productivity as a result of technology, right? Everyone agrees to that. And I don’t know exactly what it is, but the worker today is producing a lot more than the worker of 50 years ago doing something similar. Is the worker today, in real, inflation-adjusted dollars, making more money than that worker 50 years ago?
Lex Fridman
(00:27:27)
Well, there are a lot of close arguments there, but your point is well taken. It’s either the same or a little bit higher or a little bit lower, depending on the statistics. It has not increased significantly, and wealth inequality has increased significantly.
Bernie Sanders
(00:27:40)
That is the point. So you would think that if a worker is producing a lot more, that worker would be better off, would be working fewer hours, et cetera. That hasn’t been the case. And what happened in those 50 years is, according to the RAND Corporation, there has been a $50 trillion, trillion with a T, redistribution of wealth from the bottom 90% to the top 1%. So you got CEOs today making 300 times more than their workers. You got three people on top owning more wealth than the bottom half of American society. So that’s why people are angry, and they’re worried that their kids may have a lower standard of living than they do, for the first time in the history of the country. So there’s a lot of anger out there, and I think we tapped some of that anger in a constructive way, essentially saying, you know what? We don’t need so few to have so much wealth and power. Let’s distribute it more fairly in America.
Lex Fridman
(00:28:36)
I got to get back to 2016 because it’s such a historic moment. There are a lot of fans of yours who wanted you to keep fighting, because in the end you forgave the establishment and joined them in support. And your fans wanted to keep fighting for a takeover, a progressive takeover, of the Democratic Party. If you look back and had to do it all over again, what would you do differently?
Bernie Sanders
(00:29:02)
Well, by the way, in terms of a takeover of the Democratic Party, we did try. We ran… Do you know who Keith Ellison is? Keith is now the Attorney General of the state of Minnesota. He’s doing a great job. Really one of the outstanding attorneys general in the country. And Keith was then a member of Congress, and we ran Keith to become the head of the DNC, and the establishment, from the President of the United States on down, went crazy. And they beat him by a few votes, not a whole lot. Look, you faced… And that’s the exact same position that many of us are in right today. So people say, well, why did you support Hillary Clinton?

(00:29:42)
Yeah, what’s the alternative? Donald Trump? I think Donald Trump is an extremely dangerous person trying to undermine American democracy. So I can’t support him. Hillary Clinton, obviously her views are very, very different from mine. But in that moment, that’s where politics becomes really tricky, and it ain’t easy. And sometimes you have to do things that you’re not really all that excited about. But I think it was right to try to do what I could to prevent Trump from getting elected. And in 2020 I did the same with Biden, and we had more success with Biden than we had with Clinton.

Barack Obama

Lex Fridman
(00:30:21)
Well, there’s this interesting story about a long-time-coming meeting between you and Obama in 2018, I believe. Ari Rabin-Havt, who was a former deputy campaign manager of yours, wrote a great book about you called The Fighting Soul: On the Road with Bernie Sanders. And he tells many great stories, but one of them is your meeting with Obama. And he says that Obama told you, Bernie… I wish I could do a good Obama impression. Bernie, you’re an Old Testament prophet, a moral voice for our party, giving us guidance. Here’s the thing though: prophets don’t get to be king. Kings have to make choices, prophets don’t. Are you willing to make those choices? Basically Obama’s making the case that you have to moderate your approach in order to win. So was Obama right?
Bernie Sanders
(00:31:14)
Look, and again, that’s why politics is very, very fascinating. Sometimes you can run and lose and really win, if your goal is not just individual power but transforming society. You mentioned Martin Luther King Jr., who is one of my heroes. Another one of my heroes is Eugene Victor Debs. Does that ring a bell?
Lex Fridman
(00:31:38)
Yeah. Yes.
Bernie Sanders
(00:31:39)
Okay.
Lex Fridman
(00:31:40)
For many reasons, yes.
Bernie Sanders
(00:31:41)
All right. Many listeners may not know who Debs was. Debs was a union organizer in the early 1900s, helped form the American Railway Union, ran for president, I think five times. Ran the last time while he was in a jail cell because of his opposition to World War I and got a million…
Bernie Sanders
(00:32:00)
… while he was in a jail cell because of his opposition to World War I and got a million votes doing that. Debs lost badly in every race that he ran. In 1932, Franklin Delano Roosevelt ran for president. And much of what Roosevelt ended up doing was at least some of what Debs had talked about. Debs helped lay the groundwork for ideas. So sometimes you can lose and win, if you’re into transforming society. My view, where I disagree with Obama, is I think you have got to raise consciousness among ordinary people. And when people know what’s going on and are prepared, in an organized way, to fight for change, they can make incredible changes. And we’ve seen that in recent years. Today, we take for granted we have a woman running for president of the United States, whom I’m supporting. We have had other women running for president.

(00:32:54)
We have women governors and senators. Not so many years ago in the United States Senate, there were 98 men, two women. Even before that, 1920 was when women got the right to vote. How did that change? How did women’s role in society change? It changed because women and their male allies stood up in force. Gay rights, I’m old enough to remember that anybody I knew who was gay, you think they would talk about it? Come out about it? No they wouldn’t. That’s changed. We have seen in terms of civil rights, massive changes. Change happens when people at the grassroots level demand that… We talked about healthcare a moment ago, we will get universal Medicare for all when millions of people make it clear that’s what they want. So I believe politics starts at the grassroots level, and that’s how you got to bring about change.
Lex Fridman
(00:33:42)
So just to go back to Obama though, in many ways, he too is a singular historic figure in American politics who has brought about a lot of change. He’s a symbol I think that would be remembered for a long time. What do you admire most about Obama?
Bernie Sanders
(00:33:59)
Well, I know him. We’re not best friends, but I know him well and we chat every once in a while. First of all, don’t underestimate what it was in 2008 to be the first black president in the history of this country. And I think few would deny that he’s an extraordinarily intelligent guy. Very, very articulate, one of the best speakers that there is in America, and that he and his family, and again, it’s a lot harder than it looks. He and his family for eight years, that’s his wife Michelle and his kids, really held that office in a way that earned I think the respect of the American people, even if people disagreed with him politically. So he deserves… And again, don’t underestimate. I think years ago there were people who said, “A black president in our lifetimes? Never going to happen. Can’t happen. Too racist a country.”

(00:35:00)
He did it. And that is a huge accomplishment. And I think he has had some significant achievements in his presidential tenure. He and I did disagree on a number of issues. I think he will tell you, I think his public stance is that, yeah, if you have to start all over again, he would do Medicare for all single payer. But where we are right now, the best he could do is the Affordable Care Act. Well, we disagree on that and we disagree on other things, but I think he deserves an enormous amount of credit for what he has accomplished.
Lex Fridman
(00:35:36)
And he, like you, also gave a damn good speech opposing the Iraq war before running for president. And that takes courage.
Bernie Sanders
(00:35:44)
Yes, it does.
Lex Fridman
(00:35:45)
But then it also shows that once you get into office, it’s not so easy to oppose or to work against the military industrial complex.
Bernie Sanders
(00:35:53)
It is very hard. People do not fully appreciate how powerful the establishment is, whether it is the healthcare industry, whether it’s the military industrial complex, whether it’s the fossil fuel industry. These people have unlimited amounts of money. They have very smart lobbyists in Washington, D.C., and they’re very, very greedy people. They want it all.

Capitalism

Lex Fridman
(00:36:16)
I have to ask you about capitalism, the pros and cons. So you wrote a book, It’s Okay To Be Angry About Capitalism. That is a thorough, rigorous criticism of, I would say, hypercapitalism, a certain kind of capitalism that you argue we are existing in today in the United States. But a lot of people would attribute to capitalism all the amazing technological innovations over the past 70-plus years that have contributed to an increase in quality of life, an increase in GDP, a decrease in poverty, a decrease in infant mortality, an increase in life expectancy. So how do you see the tension, the pros of capitalism and the cons of capitalism?
Bernie Sanders
(00:37:09)
Some of my European friends, they say Bernie, in the United States, you’re considered to be very radical. If you were here in France or Denmark or someplace, you’d be kind of a mainstream left guy. Not all that radical. So this is what I think. I mean, I think the best that we could do right now, where we are right now, is to create a society which does two things. It encourages innovation, but at the same time, it makes sure that all people in a wealthy nation have a decent standard of living. And some countries, if you look at Scandinavia, and this shocks people because we don’t talk about this at all. So in Scandinavia it has been the case, Denmark, Finland, Norway, for years that people have healthcare. That’s not a big thing. You end up in the hospital. So what? They don’t pay a bill.

(00:38:01)
And this shocks people. In America right now, we have people who will get one week, two weeks off paid vacation. Sometimes people get nothing. You know that there are people out there who have no vacation at all. In Germany, you got six weeks paid vacation and other holidays as well. People are shocked by that. In America, we don’t have paid family and medical leave. The only major country not to do it. Other countries, your wife gets sick, you stay home with her, your kids get sick, not a big deal. You get a certain amount of paid family and medical leave. Costs of prescription drugs are far more affordable. So what you want to do is create what’s called a social safety net. That means I don’t care what your income is, of course you’re going to have healthcare, it is a human right. Of course you’re going to have housing that is affordable.

(00:38:51)
Of course your kids are going to have great quality education from child care to university without much cost. Every country does it a little bit differently. But there are countries in the world right now, I think in Germany, I think college is now tuition-free, as I recall, for obvious reasons. They want to have the best educated workforce they can. So in terms of government playing a role in a civilized democratic society of providing all basic needs, healthcare, education, housing, retirement benefits, yes, that is what we’ve got to do. Now, does that mean then that the government is going to run every mom and pop store on the corner? Of course not. You want innovation, you want to go out and start a business, produce a product, good luck to you. Make money. But on the other hand, in terms of even making money, we want you to be able to do that. Come up with good products, good services.

(00:39:46)
But do I think you should end up with $100 billion? No, I don’t. And you know what? It’s funny. I did an interview with Bill Gates, who’s I think the third-wealthiest guy in the country, struggling behind Musk and Bezos I think, and he’s only worth a hundred plus billion. But he gets by. And I said to him, “Bill,” he was supposed to ask me questions. I asked him the question, I said, “Bill, tell me something. You’re an innovator with Microsoft and all that stuff. Did you know that you’d become a multi-billionaire? And was that what motivated you?” And he said, “No.” And I believe he was honest. “I loved doing whatever. I loved programming.” He was a kid. He started doing that. He loved it. He was motivated by it. Do you think that there are scientists out there working day and night trying to develop drugs to deal with Alzheimer’s or cancer who are motivated by, “Boy, if I come up with this drug, I’m going to become a billionaire”?

(00:40:39)
So I think we want to reward success. Fine, but you don’t need a billion dollars. We want people to get satisfaction from what they accomplish, the work they’re doing, whether it’s cleaning the street or developing a new drug. So I think we have gone a little bit far, and you’re right, in talking about the book, it was an attack on what I call, what you call, hypercapitalism or ubercapitalism. But right now, and this is not an American issue, this is a global issue. It’s not an accident that Musk is over there in Saudi Arabia talking to the trillionaire families in the Middle East. These guys, Putin and his friends, you got probably not more than five, 10,000 extraordinarily wealthy families who have unbelievable economic power over 7 billion people on this planet.
Lex Fridman
(00:41:24)
Well, Elon Musk is actually an interesting case because he’s investing all the money back into the businesses. So I think there is a balance to be struck and you just spoke to it, which is we can still celebrate even big companies that are bringing wealth to the world, that are building cool stuff, that are improving quality of life. But we can question why it is that the working class does not have a living wage in many cases, and sort of try to find that balance.
Bernie Sanders
(00:41:52)
That’s right. Look, I am no great fan of Elon Musk, especially in the role that he’s playing right now in Trump’s campaign. But is he a brilliant guy? Of course he is. Does he work like a dog? Of course he does. Does he come up with these incredible innovations in companies? Yes, he does. Does he deserve credit for that? Yeah, he does. But even in terms of encouraging innovation, I would hope that we are focusing on the important issues. I would love to see great innovators figure out how we build the affordable housing that we need, come up with the great drugs that we need to solve many of the terrible illnesses that plague people. Climate change for God’s sakes. All right, do we need innovation? We’re making some progress in this country. Should we do more? What kind of technologies out there can really cut back on carbon emissions?

(00:42:41)
So I hope we focus on some of the most important issues that impact humanity, but reward innovators. I don’t have a problem with that, but I do have a problem when three people end up owning more wealth than the bottom half of American society.
Lex Fridman
(00:42:54)
Maybe you can briefly speak to something you tweeted recently about Donald Trump going to McDonald’s and the minimum wage of, I believe, $7.50. Can you just speak to that tweet?
Bernie Sanders
(00:43:05)
Look, nothing new. Trump didn’t invent it. It’s called a photo opportunity. I’ve done one or two in my life too. So you go to a place. He puts on an apron. Good old Donald Trump, just another McDonald’s worker. But anyhow, he was a… So fine, he did his photo op. That’s fine. Kamala Harris was in North Carolina handing out food to people who were victims of the hurricane. Fine. That’s what politicians do. But some reporter asked him, they said, “Mr. Trump, are you for raising the minimum wage?” And that was a fair question because you got, I don’t know how many, but many, many thousands of McDonald’s workers and millions of other American workers right now are trying to get by on 9, 10, 11 bucks an hour. Federal minimum wage is seven and a quarter. You have people working at McDonald’s right now for sure who are working for 12, 13 bucks an hour.

(00:43:55)
So the reporter said, what do you think about raising the federal minimum wage? And he’s, “Oh, these are great workers. I love McDonald’s,” and so forth. He didn’t answer the question. Well, I think that in the richest country in the history of the world, if you work 40 hours a week, you should not be living in poverty. And that means we should have a federal minimum wage, not an absurd seven and a quarter an hour, but in my view, $17 an hour. Will that solve all the economic problems for working-class people? No, it won’t. It’ll help. It’ll help.

Response to attacks

Lex Fridman
(00:44:25)
Since running for president, you’ve often been attacked, especially from the right, about being worth I believe $2 million and owning three houses. So from my perspective, the answer to that is most of your wealth has been earned from writing books and selling those books. And you are one of the most famous politicians in the world. And so your wealth, in comparison to other people of that fame level and other politicians, is actually quite modest. So what’s your response usually to those attacks?
Bernie Sanders
(00:44:59)
Do I own three residences? Yeah, I do. I live here in Burlington, Vermont. We live in a middle-class neighborhood. Nice house. Guess what? I’m a United States senator and I own a home in Washington, D.C., as do most senators. You live there year after year. Actually when I was in Congress for 16 years, I rented all the time, but I got elected. Okay, got a six-year term. You know what? Let’s buy a house. So we bought a house and guess what? Like many thousands of people in the state of Vermont, I have a summer camp. It’s a nice one on Lake Champlain. That’s it. Now how did I get the money? You’re right. I wrote two best-selling books, including this book on capitalism. It was a New York Times bestseller for a while. And also another book was a youth book. I make, I don’t know, $175,000 a year. And that’s more or less how I became the zillionaire that I am.
Lex Fridman
(00:45:57)
Well, I should also mention that sometimes the word mansion is used and I think your residences are quite modest, at least-
Bernie Sanders
(00:46:03)
Normal houses and they’re not… They’re middle-class houses. Very nice house.
Lex Fridman
(00:46:07)
So when you started in politics I read you were worth $1,100.
Bernie Sanders
(00:46:12)
That much.
Lex Fridman
(00:46:13)
Yeah, that much. That’s right. Has the increase in wealth changed your ability to relate to the working class?
Bernie Sanders
(00:46:20)
Well, it’s a good question. And obviously growing up in a working-class family has been maybe the most singularly significant aspect of my politics. I grew up without money in a family that lived in a rent-controlled apartment in Brooklyn, New York. So that has impacted me. I’ll tell you, I don’t really give a damn about money. I drive a car that’s 11 years old. It’s an old car and money… Here is my jewelry. It’s a solar watch and my wedding ring. That’s about it. I don’t have a Rolex watch, would not be interested in it. But I’ll tell you what has impacted me, my wife who also grew up in a working-class family will tell you the same. We don’t worry… You raise that issue. If we have to go to the doctor, if our kids have to go to the doctor, we go to the doctor.

(00:47:08)
I don’t stay up nights worrying. There was a time I had to worry about how to pay my electric bill. I don’t worry about that anymore. So what has happened is that stress, that economic stress of worrying about a financial disaster, that’s gone, and that is enormous. I, maybe as much or more than any other member of the Senate, work hard not only for, but with working-class people. I’m chairman of the committee that deals with labor issues. We have been involved probably in dozens of strikes all over this country. I’ve been on picket lines. So I do my best. It’s a very easy trap to fall into. You can get separated from ordinary people and their struggles. Not hard to do. I try as hard as I can not to do that.
Lex Fridman
(00:47:53)
So sometimes people say, can money buy happiness? I think I agree with you that worry, sort of being able to fill up your car and not worry about how much it’s going to cost or be able to get a-
Lex Fridman
(00:48:00)
And not worry about how much it’s going to cost or be able to get food for dinner and not worry about how much it’s going to cost. Or even, I’ve been poor most of my life, but I’ve been very fortunate recently to have enough wealth to not worry about healthcare, to have insurance, and be able to afford an emergency room visit. And that worry is just such a giant lift off your shoulders.
Bernie Sanders
(00:48:26)
Lex, I think you said it very well. I remember even I saw this change in myself. When I used to go out, and I do the grocery shopping. My wife does a lot of the cooking, I do the grocery shopping. And I used to look at the prices of everything, I do that less now. I said, “What the hell? So what? It costs 50 cents more for this can of stuff. So, what?” But that’s a luxury you have when you don’t have to worry about that. And I don’t have to worry about that.

(00:48:52)
But your point is, again, to me, I don’t like big fancy cars or big fancy homes, don’t go on… My wife will tell you we’ve not been on a real vacation for God knows how long, because I work pretty hard. But the major thing about having money, which is enormously important, is just what you said. I don’t have to worry. If somebody in my family gets sick, I don’t have to worry about that. I don’t have to worry about putting food on the table or paying the mortgage. So, that’s what money has done.

AOC and progressive politics

Lex Fridman
(00:49:21)
Okay. Let me ask you about the future of the Democratic Party. So one of the biggest impacts you’ve had is you’ve been the fuel, the catalyst for the rise of the progressive caucus, the progressive movement within the Democratic Party. Do you think that is the future, that progressives, even Democratic socialist leaders, will take over the party?
Bernie Sanders
(00:49:42)
That is, to my mind, the most important question regarding American politics. One of the successes that we’ve had, and I’m proud to have played a role in this, is that if you go to the House of Representatives right now, you’ll see almost a hundred members of the Progressive Caucus led very well by a woman from Washington, Pramila Jayapal. Does a great job. That’s people like Alexandria Ocasio-Cortez and Ilhan Omar and many others. Many of them are young, often women, people of color. And many of them come from working-class backgrounds. So, what we have been able to do in recent years is elect a number of strong progressives who represent working families very, very effectively.

(00:50:27)
The struggle in the Democratic Party is between the corporate wing and the progressive wing. And the corporate wing takes a whole lot of money, sees its salvation in getting a whole lot of money from wealthy individuals and large corporations. And is not very vigorous in my view, in representing the needs of working-class people. If they were, we would have healthcare for all, we would have a minimum wage that was a living wage, we would not have a housing crisis. We would not have a tax system in which billionaires pay an effective tax rate that is lower than a truck driver or a nurse.

(00:51:11)
So, I think one of the reasons that Trump has had political success is, it’s not so much his ideas. Most working class people don’t think we should give tax breaks to billionaires or worry about the size of Arnold Palmer’s genitalia. But they are angry, people are angry. And the Democrats have not responded effectively to that anger. So, the struggle that we are waging right now is the future of the Democratic Party. Will it be a party of the working class and represent working class issues, whether you’re black or white or Latino or Asian or whatever you may be? Or will it be a corporately dominated party? That’s the struggle we’re in right now.
Lex Fridman
(00:51:49)
Did you consider running in 2024? From my perspective, I would’ve loved it if you ran. I think you would’ve had a great chance of winning. Not just the primary, but the presidency.
Bernie Sanders
(00:52:00)
I gave about five minutes thought to it. And the reason was we have a slogan of the progressive movement, it’s not about me, it’s about us. And to have taken on Biden, who in my view on domestic issues, has been quite strong, would’ve really split the Democratic Party and laid the groundwork for an easy Trump victory. And that I did not want to see.

(00:52:25)
So sometimes in life, and I know that a lot of younger people don’t agree with me, but you got to make choices which are painful. So I strongly supported Biden, because I liked his domestic record. He’s done some good things against a lot of opposition. And I’m supporting Kamala right now. But I’m doing my best to see that a dangerous guy like Donald Trump does not become president.
Lex Fridman
(00:52:51)
And the hope for you is that there will be future candidates that are populist, that are progressive?
Bernie Sanders
(00:52:56)
Yes, absolutely.
Lex Fridman
(00:52:57)
Let me ask you about AOC. She’s become one of the most influential voices for the progressive cause in the United States. You two had a great conversation on your podcast and in general, you work together. So, what to you is most impressive about her?
Bernie Sanders
(00:53:12)
I really like Alexandria a whole lot. She is a young woman who comes from a working class background. She helped her mother clean houses. She was a bartender in the Bronx, New York. And I’m very proud that my campaign for president inspired her to run. And she ran on a progressive working class program. And she took on one of the more powerful guys, a guy named Joe Crowley, who was pretty high up in the Democratic Party. And she knocked on doors, she had no money. She did a very strong grassroots effort, and I appreciate that. So, that’s number one. I like what she stands for, she’s incredibly smart. And she has that certain charisma that maybe you’re born with it, maybe you develop it. I don’t know.

(00:54:05)
A couple of years ago she came up here to Vermont to spend some time. She and her partner, Riley, came up. And we were out in the street and people saw her and they said, “Oh, Congresswoman,” and she just smiled. And she had an approach to people, which was beautiful. I mean, it wasn’t phony, it was real. But to be a politician, you got to know how to… You could be a great intellectual but not be able to relate to people. She relates well to people. And so, I think both from a personality perspective, from an intellect perspective, from an ideological perspective, she helped create the Green New Deal concept, the need to create jobs as we transform our energy system away from fossil fuel. Strong advocate for Medicare for all, workers’ rights. So, I’m a big fan of Alexandria.
Lex Fridman
(00:54:53)
What do you think is the most powerful enduring impact you’ve had on American politics? Looking back, you’ve been in it for quite a bit.
Bernie Sanders
(00:55:00)
Well, I don’t know that I can give you a singular answer. I was mayor of this city and proud of what we accomplished here, proud of my accomplishments as a U.S. Senator. When COVID was devastating this country and we had a massive economic downturn, as chairman of the budget committee, I helped write the American Rescue Plan, which put a lot of money into people’s pockets. We cut childhood poverty by 40% by providing a child tax credit. We kept hospitals going, we kept colleges going, kept people from getting evicted, helped get public health out there, people getting the vaccines. I’m proud of that.

(00:55:35)
But at the end of the day, I think what I have shown is that the ideas, and this gets back to the early part of this conversation, the ideas that I am talking about are ideas that are widely supported. So Donald Trump says, “Oh, Bernie Sanders is far left,” like I’m some kind of extremist coming up with ideas that nobody supports. Everything that I talk about, raising the minimum wage, health care for all, a tax system which demands the billionaires pay their fair share, those are all popular ideas. But people didn’t know that. You got to run for president and have 20,000 people come out to your rallies and win 23 states. And they say, “Well, maybe those ideas are not so crazy after all.” And we’ve got to entertain them.

(00:56:20)
The establishment doesn’t like that. They really don’t. They want to tell you, and this is their main, this is how they succeed. What they say, Lex, is, “The world is the way it is. It always will be this way. We got the wealth, we got the power. And don’t think of anything else. This is the way it is. You have no power. Give up.” They don’t say it quite that way, but that’s really what the intent is.

(00:56:42)
And what we showed is, guess what? Running an outsider campaign, we took on the Democratic establishment, we came close to winning it. And we did win 23 states. And the ideas that we’re talking about are the ideas that working-class people, young people believe in.
Lex Fridman
(00:57:00)
Yeah, you showed that it’s possible to win. And that’s an idea that will resonate for decades to come.
Bernie Sanders
(00:57:07)
And out of that came dozens of candidates now in the House of Representatives, people on city council, people on state legislature who did win.

Mortality

Lex Fridman
(00:57:14)
So we mentioned about the worry of getting sick, the worry of life that many people in the working class are suffering from. But there’s also the worry that we all experience of the finiteness of life. Do you ponder your own mortality? Are you afraid of it?
Bernie Sanders
(00:57:32)
Well, when you’re 83, it does cross your mind.
Lex Fridman
(00:57:33)
All right.
Bernie Sanders
(00:57:35)
Yeah, of course I do. And-
Lex Fridman
(00:57:36)
Are you afraid of it?
Bernie Sanders
(00:57:37)
No, I’m not afraid of death. What I am afraid of, I think, is infirmity. I have been, knock on wood, this is wood, I think, reasonably healthy with an exception. I had a heart attack five years ago. And what blew me away was that my body failed me for the very first time in my life. That was stunning to me, that suddenly, I was in a hospital bed.

(00:58:02)
I have a great deal of compassion for people as we speak, who are in nursing homes, having a hard time walking. Maybe your mental agility is slipping a little bit. That’s tough. That’s what worries me. We are all going to die, and that’s that. So I’m not afraid of that, but that aspect of getting older, and that does concern me.
Lex Fridman
(00:58:27)
That said, your mind is as sharp as any politician that I’ve ever heard. And also just off mic, I should say, just the warmth that you radiate. And I deeply, deeply appreciate that-
Bernie Sanders
(00:58:28)
Oh, thank you.
Lex Fridman
(00:58:42)
… just as a human being. So, you still got it. After all that, after all those speeches, after all those houses, after all of it, there’s still the humility and just the sharpness, the wit is all there. So Bernie, yeah, like I said, I wish you would’ve run this year, but I also hope that there will be future candidates.
Bernie Sanders
(00:59:05)
Yeah. And there will be, Lex. I absolutely do believe that. And you asked about my legacy, I think it’s that all these wonderful, really, really wonderful people have now gotten involved in the political process and are fighting for justice. That’s a great legacy.

Hope for the future

Lex Fridman
(00:59:20)
What gives you hope about the future of this country, about the future of the world?
Bernie Sanders
(00:59:24)
Sometimes one can become very cynical. You look at the terrible wars that are going on right now, you look at the divisiveness in this country, the ugliness, the poverty, you look at climate change. You can get depressed from all that. But I am lucky in this sense, in that I’ve had the opportunity… People often ask, “What inspires you? How do you keep going?” And I remember, actually it was in California where it really crystallized for me. I was at a rally in the agricultural area of California. And we did a rally, it was sunset, thousands of people were out. And you looked around the crowd and there were young people, black and white and Latino and Asian American, a huge cross section. There were older people, and they all wanted to make America a very much better country. And it really moved me.

(01:00:17)
I mean, I see that time and time again, and I’ve just been on the campaign trail. And you see great people, really beautiful people who are not interested in becoming billionaires. They want to improve life for other people in this country. So, I am grateful that I… It sounds like a platitude. It’s what every politician says, oh, blah, blah, blah, blah. But when you go out around the country, you go to Native American reservations and you go to factories and everything, and you see so many wonderful people. I have been able to see things that many others have not. I’ve been to every state in the country, and that inspires me.
Lex Fridman
(01:00:55)
I share their optimism, I share your optimism. Bernie, I’ve been a fan for a long time. It’s a great honor to speak to you today. Thank you so much.
Bernie Sanders
(01:01:02)
Well, thank you very much for what you’re doing. Let me just say a word about what you’re doing.
Lex Fridman
(01:01:06)
Okay. Let’s go.
Bernie Sanders
(01:01:06)
Return the compliments here.
Lex Fridman
(01:01:09)
Okay.
Bernie Sanders
(01:01:09)
I think there is a growing dissatisfaction with corporate media. And not because it’s fake news or the reporters lie all the time, that’s nonsense. They don’t. But I think people want to hear folks really talk, in a calm manner, about some of the very important issues which are not discussed in corporate media. And I think that’s what you and some others are doing. So, I thank you very much. It’s a very important service to the country.
Lex Fridman
(01:01:35)
And thank you, from a mayoral perspective, for creating a wonderful town. And I look forward to looking at the fall leaves walking around tonight.
Bernie Sanders
(01:01:44)
Well, I didn’t quite create the leaves. I did create some other things.
Lex Fridman
(01:01:47)
Okay. Thank you so much, Bernie.
Bernie Sanders
(01:01:48)
Thank you, Lex.
Lex Fridman
(01:01:50)
Thanks for listening to this conversation with Bernie Sanders. To support this podcast, please check out our sponsors in the description.

(01:01:57)
And now, let me leave you with some words from Aristotle: “The real difference between democracy and oligarchy is poverty and wealth. Wherever men rule by reason of their wealth, whether they be few or many, that is an oligarchy. And where the poor rule, that is democracy.” Thank you for listening and hope to see you next time.

Transcript for Graham Hancock: Lost Civilization of the Ice Age & Ancient Human History | Lex Fridman Podcast #449

This is a transcript of Lex Fridman Podcast #449 with Graham Hancock.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Graham Hancock
(00:00:00)
The big question for me in that timeline is why didn’t we do it sooner? Why did it take so long? Why did we wait until after 12,000 years ago, really after 10,000 years ago to start seeing the beginnings of civilization?
Lex Fridman
(00:00:15)
The following is a conversation with Graham Hancock, a journalist and author who for over 30 years has explored the controversial possibility that there existed a lost civilization during the last ice age and that it was destroyed in a global cataclysm some 12,000 years ago. He is the presenter of the Netflix documentary series, Ancient Apocalypse, the second season of which has just been released and it’s focused on the distant past of the Americas.

(00:00:46)
A topic I recently discussed with the archeologist Ed Barnhart. Let me say that Ed represents the kind of archeologist scholar I love talking to on the podcast, extremely knowledgeable, humble, open minded, and respectful in disagreement. I’ll do many more podcasts on history, including ancient history. Our distant past is full of mysteries, and I find it truly exciting to explore those mysteries with people both on the inside and the outside of the mainstream in the various disciplines involved. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Graham Hancock.

Lost Ice Age civilization

Lex Fridman
(00:01:34)
Let’s start with a big foundational idea that you have about human history. That there was an advanced Ice Age civilization that came before and perhaps seeded what people now call the six cradles of civilization: Mesopotamia, Egypt, India, China, the Andes, and Mesoamerica. So let’s talk about this idea that you have. Can you at the highest possible level describe it?
Graham Hancock
(00:01:57)
It would be better to describe it as a foundational sense of puzzlement and incompleteness in the story that we are taught about our past, which envisages, with a few ups and downs, more or less a straightforward evolutionary progress. We start out as hunter-foragers, then we become agriculturalists. The hunter-forager phase could go back hundreds of thousands of years. I mean, this is where it is also important to mention anatomically modern humans, and that we’re not the only humans. We had Neanderthals from, I don’t know, 400,000 years ago to about 40,000 years ago. They were certainly human, because anatomically modern humans interbred with them, and we carry Neanderthal genes. There were the Denisovans, from maybe 300,000 to perhaps even as recently as 30,000 years ago. And again, interbreeding took place. They’re obviously a human species. So we’ve got this background of humans who didn’t look quite like us.

(00:03:08)
And then we have anatomically modern humans. And I think the earliest anatomically modern human skeletal remains are from Jebel Irhoud in Morocco and date to about 310,000 years ago. So the question is what were our ancestors doing after that? And I think we can include the Neanderthals and the Denisovans in that general picture. And why did it take so long? This is one of the puzzles, one of the questions that bother me. Why did it take so long? When we have creatures who are physically identical to us, we cannot actually weigh and measure their brains. But from the work that’s been done on the crania, it looks like they had the same brains that we do with the same wiring. So if we’ve been around for 300,000 plus years at least, and if ultimately in our future was the process to create civilization or civilizations, why didn’t it happen sooner?

(00:04:07)
Why did it take so long? Why was it such a long time? Even the story of anatomically modern humans has kept on changing. I remember a time when it was said that there hadn’t been anatomically modern humans before 50,000 years ago, and then it became 196,000 years ago with the findings in Ethiopia and then 310,000 years ago. There’s a lot of missing pieces in the puzzle there. But the big question for me in that timeline is why didn’t we do it sooner? Why did it take so long? Why did we wait until after 12,000 years ago, really after 10,000 years ago to start seeing what are selected as the beginnings of civilization in places like Turkey, for example. And then there’s a relatively slow process of adopting agriculture. And by 6,000 years ago, we see ancient Sumer emerging as a civilization. And we’re then in the pre-dynastic period in ancient Egypt as well 6,000 years ago, beginning to see definite signs of what will become the dynastic civilization of Egypt about 5,000 years ago.

(00:05:21)
And interestingly round about the same time, you have the Indus Valley civilization popping up out of nowhere. And by the way, the Indus Valley civilization was a lost civilization until the 1920s when railway workers accidentally stumbled across some ruins. I’ve been to Harappa and Mohenjo-Daro, and these are extraordinarily beautifully centrally planned cities. Clearly they’re the work of an already sophisticated civilization. One of the things that strikes me about the Indus Valley Civilization is that we find a steatite seal of an individual seated in a recognizable yoga posture. And that seal is 5,000 years old, and the yoga posture is Mulabandhasana, which involves a real contortion of the ankles and twisting the feet back. It’s an advanced yoga posture. So there it is, 5,000 years ago. And that then raises the question, well, how long did yoga take to get to that place when it was already so advanced 5,000 years ago?

(00:06:24)
What’s the background to this? China, the Yellow River civilization, again, it’s around about the same period, five to 6,000 years ago. You get these first signs of something happening. So it’s very odd that all around the world we have this sudden upsurge of civilization about 6,000 years ago, preceded by what seems like a natural evolutionary process that would lead to a civilization, and yet with certain ideas being carried down and manifested and expressed in many of these different civilizations. I just find that whole idea very puzzling and very disturbing, especially when I look at this radical break that takes place in not just the human story, but the story of all life on Earth, which was the last great cataclysm that the Earth went through, the Younger Dryas event. It was an extinction-level event. That’s when all the great megafauna of the Ice Age went extinct.

(00:07:28)
It’s after that event that we start seeing what are taken to be the beginnings of the first gradual steps towards civilization. We come out of the Upper Paleolithic, as it’s defined, the end of the old Stone Age, and into the Neolithic. And that’s when the wheels are supposedly set in motion to start civilization rolling. But what happened before that, and why did that suddenly happen then? And I can’t help feeling, and I’ve felt this for a very long while, that there are major missing pieces in our story. It’s often said that I’m claiming to have proved that there was an advanced lost civilization in the Ice Age. And I am not claiming to have proved that. That is a hypothesis that I’m putting forward to answer some of the questions that I have about prehistory. And I think it’s worthwhile to inquire into those possibilities because the Younger Dryas event was a massive global cataclysm, whatever caused it.

(00:08:32)
And it’s strange that just after it we start seeing these first signs.

Göbekli Tepe

Lex Fridman
(00:08:39)
So the current understanding in mainstream archeology is that after the Younger Dryas is when the civilizations popped up in different places of the globe with a lot of similarities, but they popped up independently.
Graham Hancock
(00:08:54)
Independently. And by coincidence. And by coincidence, those big civilizations that we all remember as the first civilizations, Sumer, Egypt, the Indus Valley Civilization, China, they all pop up at pretty much the same time. That is the mainstream view.
Lex Fridman
(00:09:10)
And they don’t just pop up, they kind of build up gradually. First there’s some settlements.
Graham Hancock
(00:09:15)
Oh, definitely, yes.
Lex Fridman
(00:09:16)
And then there’s different dynamics of how they build up and the role of agriculture. And that is also non-obvious, but it’s just there’s first a kind of settlement, a stabilization of where the people are living. Then they start using agriculture, then they start getting urban centers and that kind of stuff.
Graham Hancock
(00:09:33)
It seems like an entirely reasonable argument. Everything about that makes sense. There is no doubt that you’re seeing evolutionary progress, social evolution taking place in those thousands of years before Sumer emerges. But what’s happening now, really… I spent much of the nineties and the late 1980s investigating this issue of a lost civilization. I wrote a series of books about it. By 2002, when I published a book called Underworld, which was the most massive and heaviest book that I’ve ever written, because I was writing very defensively at the time, my wife Santha and I had spent seven years scuba diving all around the world looking for structures underwater, often led by local fishermen or local divers to anomalies that they’d seen underwater. By the time that book was finished, I thought, actually, I’ve done this story. I’ve walked the walk.

(00:10:26)
I really don’t have much more to say about it. And I turned in another direction and wrote a book called Supernatural: Meetings With the Ancient Teachers of Mankind, recently retitled Visionary. And that was fundamentally about the role of psychedelics in the evolution of human culture. And I didn’t think that I would go back to the lost civilization issue, but Göbekli Tepe in Turkey kept on forcing itself upon me, with more and more discoveries there, including the 11,600-year date from Enclosure D, which contains the two largest megalithic pillars. And I reached a point where I realized I have to get back in the water and I have to investigate this again. And Göbekli Tepe was a game changer, but I think it’s a game changer for everything because of the extraordinary nature of it. We are looking at a major megalithic site which is at least five and a half thousand years older than Ġgantija in Malta, which was previously considered to be the oldest megalithic site in the world.

(00:11:32)
And this led of course to a huge amount of interest and attention, both from the Turkish government, who see the tourism potential of having the world’s oldest megalithic site, and from archeologists. And this in turn has led to exploration and excavation throughout the region. And what they’re finding throughout that whole region around Göbekli Tepe, going down into Syria and further down into the Jordan Valley as far as Jericho, and even across a bit of the Mediterranean into Cyprus, is what Turkish archeologists are now calling the Taş Tepeler civilization. They’re calling it a civilization, the Stone Hills Civilization, with very definite identifying characteristics: semi-subterranean circular structures, the use of T-shaped megalithic pillars, sometimes not anywhere near as big as those at Göbekli Tepe. It’s clear that Göbekli Tepe now was not the beginning of this process. It was actually, in a way, the end of this process.

(00:12:33)
It was the summation of everything that the Stone Hills Civilization had achieved. But what is becoming clear is that there was a period before the foundation of Göbekli Tepe. As far as we know, that date of 11,600 years ago is the oldest date for Göbekli Tepe. But of course there’s a lot of Göbekli Tepe still underground, so we can’t say for sure that that’s the oldest, but it’s the oldest so far excavated. What we’re seeing is that in that whole region around there, something was in motion, and it began to go into motion round about the beginning of the Younger Dryas. And this is where these two dates are really important. The Younger Dryas, I’ll round the figures off, begins around 12,800 years ago, and it ends around 11,600 years ago.

(00:13:24)
So Göbekli Tepe’s construction date, if it is 11,600 years ago, if they don’t find older materials, marks the end of the Younger Dryas. But at the beginning of the Younger Dryas, we are already seeing the stirrings of the kind of culture that manifests in full form at Göbekli Tepe. And after the construction of Göbekli Tepe, in fact, even during the construction of Göbekli Tepe, we see agriculture beginning to be adopted. The people who created Göbekli Tepe were all hunter-foragers at the beginning. But by the time Göbekli Tepe was finished, and it was definitely deliberately finished, it was closed off, closed down, deliberately buried, covered with earth, covered with rubble, and then topped off with a hill, which is why Göbekli Tepe is called what it is: Göbekli Tepe means pot-bellied hill, or the hill of the navel. For a long time, Göbekli Tepe was thought to be just a hill that looked a bit like a pot belly.
Lex Fridman
(00:14:29)
Before you say how it was discovered: I think this is one of the most fascinating things on Earth, period. So maybe can you say what it is and how it was discovered?
Graham Hancock
(00:14:37)
Well, Göbekli Tepe is first of all the oldest fully elaborated megalithic site that we know of anywhere in the world. It doesn’t mean that older ones won’t be found, but it is the oldest so far found. The part of the site that’s been excavated is a tiny percentage of the whole site; that much we do know. My first visit to Göbekli Tepe was in 2013, and the late Dr. Klaus Schmidt, who died a year later, was very generous to me and showed me around the site over a period of three days. And he explained to me that they’d already used ground-penetrating radar on the site, and they know that there’s much more Göbekli Tepe still underground. So anything is possible in terms of the dating of Göbekli Tepe. But what we have at the moment is a series of almost circular, but not quite circular, enclosures, which are walled with relatively small stones.

(00:15:34)
And then inside them you have pairs of megalithic pillars. And the archetypal part of that site is Enclosure D, which contains the two largest upright megaliths, about 18 feet tall and reckoned to weigh somewhere in the range of 20 tons, if I have my memory correct. They’re substantial, hefty pieces of stone. It isn’t some kind of extraordinary feat to create a 20-foot-tall or 20-ton megalith, nor is it an extraordinary feat to move it. There’s nothing magical or really weird about that. Human beings can do that and always have. Besides, the quarry for the megaliths is right there; it’s within 200 meters of the main enclosures. So that’s not a mystery. The mystery is why suddenly this new form of architecture, these massive megalithic pillars, appears. And the pillars, one of the things that interests me about the pillars is their alignment.

(00:16:36)
And there is good work that’s been done which suggests that Enclosure D aligns to the rising of the star Sirius. And the rising points of the star Sirius appear to be mapped by the other enclosures, which are all oriented in slightly different directions. It was the work entirely of hunter-foragers. But by the time Göbekli Tepe was completed, agriculture was being introduced and was taking place there. Now, you asked how Göbekli Tepe was found. The answer to that is that there was a survey of that pot-bellied hill in the 1960s by some American archeologists, and they were absolutely looking for Stone Age material, for material from the Paleolithic. And they had found some Upper Paleolithic flints around there. So it looked like a good place to look. But then they noticed, sticking out of the side of the hill, some very finely cut stone, bits of very large and very finely cut stone.

(00:17:38)
And looking at that, the workmanship was so good that those archeologists were confident that it had nothing to do with the Stone Age. They thought they were looking at perhaps some Byzantine remains, and they abandoned the site and never looked at it further. And it wasn’t until the German Archaeological Institute got involved, and particularly Klaus Schmidt, who I think was a genius and had real insight into this, and started to dig at Göbekli Tepe, that they realized what they’d found: potentially the oldest megalithic site in the world. And they’d found it at the place where, according to the established historical timeline, agriculture, at any rate in Europe and Western Asia, begins. It begins in Anatolia, in Turkey, and then it gradually disseminates westward from there.
Lex Fridman
(00:18:27)
And yet the understanding is it was created by hunter-gatherers.
Graham Hancock
(00:18:31)
It was created by hunter-gatherers. Yeah, there was no agriculture 11,600 years ago at Göbekli Tepe. But by the time Göbekli Tepe was decommissioned, and I use that word deliberately, was closed down and buried, agriculture was all around it. And this was the agriculture of people who knew how to cultivate plants.
Lex Fridman
(00:18:54)
Do we have an understanding of when it was turned into, if I could say, a time capsule, protected by forming a mound around it?
Graham Hancock
(00:18:54)
Yes.
Lex Fridman
(00:19:04)
Is it around that similar time?
Graham Hancock
(00:19:05)
It stood from roughly 11,600 years ago to about 10,400 years ago, that is, until about 8,400 BC. So for around 1,200 years it was there, and it continued to be elaborated as a site. And while it was being elaborated as a site, we see agriculture, I’m going to use the word, being introduced. There’d been no sign of it before, and suddenly it’s there. And to me, that’s another of the mysteries about Göbekli Tepe. And then with the new work that’s being done, we realize that it’s part of a much wider phenomenon which spreads across an enormous distance. And the puzzling thing is that after Göbekli Tepe there almost seems to be a decline. Things fall down again, and then we enter this long, slow process of the Neolithic, thousands of years of gradual developments until we come to ancient Sumer and Mesopotamia.

(00:20:02)
But agriculture has taken a firm root by then. Actually, one other thing, I’ll just say this in passing. When I talk about a lost civilization introducing ideas to people, I’m often accused of stealing credit from the indigenous people who had those ideas in the first place. So I do find it slightly hypocritical that archeology fully accepts that the idea of agriculture was introduced to Western Europe from Turkey, and that Western Europeans didn’t invent agriculture. It was absolutely introduced by Anatolian farmers who traveled west. So the notion of dissemination of ideas perhaps shouldn’t be so annoying to archeologists as it is.

Early humans

Lex Fridman
(00:20:43)
And perhaps we should also state, if we look at the entirety of the history of hominids: humans, or hominids, have been explorers. I didn’t even know this when I was preparing for this. Looking at Homo erectus, 1.9 million years ago.
Graham Hancock
(00:21:01)
Absolutely.
Lex Fridman
(00:21:01)
Almost right away they spread out through the whole world, and we, Homo sapiens, evolved from them. And we should also mention, since we’re talking about controversial debates going on, as I understand there are still debates about the dynamics of all that was going on there. Like we mentioned, in Africa, I think the current understanding is that we didn’t come from one particular point of Africa; there are multiple locations.
Graham Hancock
(00:21:29)
This is the Out of Africa theory. I think it’s more than a theory. It’s really strongly evidenced. Why? Because we’re part of the Great Ape family and it’s an African family.

(00:21:39)
There’s no doubt that human beings, our deep origins, are in Africa. But then, as you rightly say, there were these very early migrations out of Africa by species that are likely ancestral to anatomically modern humans, including definitely Homo erectus, and the astonishingly distant travels that they undertook. Yes, I think there is an urge to explore in all of humanity. I think there is an urge to find out what’s around the next corner, what’s over the brow of the next hill. And I think that goes very deep into human character. And I think it was being manifested in those early adventures of people who left Africa, traveled all around the world, and then settled in different parts of the world. I think a lot of anatomically modern human evolution took place outside Africa as well, not only in Africa.
Lex Fridman
(00:22:32)
So I guess the general puzzlement that you’re filled with is: given that these creatures explore and spread and try out different environments, why did it take hundreds of thousands of years for them to develop complicated societies and settlements?
Graham Hancock
(00:22:51)
That’s the first big question. Why did it take so long? And that raises in my mind a hypothesis, a possibility. Maybe it didn’t take so long. Maybe things were happening that we haven’t yet got hold of in the archeological record, which await discovery. And of course, there are huge parts of the world that have not been studied at all by archeology, but the fact that huge parts of the world have not been studied at all by archeology is not on its own enough to suggest that we’re missing a chapter in the human story. The reason that I come to that isn’t only puzzlement about that 300,000-year gap. It’s also to do with the fact that there’s common iconography, there are common myths and traditions, and there are common spiritual ideas that are found all around the world, and they’re found amongst cultures that are geographically distant from one another and that are also distant from one another in time.

(00:23:53)
They don’t necessarily occur at the same time. And this is where I think that archeology is perhaps desperately needing a history of ideas as well as just a history of things, because an idea can manifest again and again throughout the human story. So there are particular issues, for example, the notion of the afterlife, the destiny of the soul: what happens to us when we die? And believe me, when you reach my age, that’s something you do think about. I used to feel immortal when I was in my forties, but now that I’m 74, I definitely know that I’m not. Well, it would be natural for human beings all around the world to have that same feeling, that same idea. But why would they all decide that what happens to the soul after death is that it makes a leap to the heavens, to the Milky Way, that it makes a journey along the Milky Way, and that there it is confronted by challenges, by monsters, by closed gates?

(00:24:54)
The course of the life that that person has lived will determine their destiny in that afterlife journey. And this idea, the path of souls, the Milky Way is called the path of souls. It’s very strongly found in the Americas right from South America through Mexico, through into North America. But it’s also found in ancient Egypt, in ancient India, in ancient Mesopotamia, the same idea. And I don’t feel that that can be a coincidence. I feel that what we are looking at is an inheritance of an idea, a legacy that’s been passed down from a remote common source to cultures all around the world, and that has taken on a life of its own within those cultures. So the remote common source would explain both the similarities and the differences in the expression of these ideas. The other thing, very puzzling thing, is the sequence of numbers that are a result of the precession of the equinoxes.

Astronomical symbolism


(00:25:54)
At least I think that’s the best theory to explain them. Here, I think it’s important to pay tribute to the work of Giorgio de Santillana and Hertha von Dechend. Giorgio de Santillana was professor of history of science actually at MIT, where you are based, back in the sixties. And Hertha von Dechend was professor of the history of science at Frankfurt University, and they wrote an immense book in the 1960s called Hamlet’s Mill, and Hamlet’s Mill differs very strongly from established opinion on the issue of the phenomenon of precession. And I’ll explain what precession is in a moment. Generally, it’s held that it was the Greeks who discovered the precession and the dating on that is put back not very far, maybe 2,300 years ago or so. Santillana and von Dechend are pointing out that knowledge of precession is much, much older than that, thousands of years older than that.

(00:26:58)
And they do actually trace it. I think I’m quoting them pretty much correctly to some almost unbelievable ancestor civilization. Reading that book was one of the several reasons that I got into this mystery in the first place. Okay, now, the precession of the equinoxes, to give it its full name, results from the fact that our planet is the viewing platform from which we observe the stars. And our planet, of course, is rotating on its own axis at roughly a thousand miles an hour at the equator. But what’s less obvious is that it’s also wobbling on its axis. So if you imagine the extended North Pole of the earth pointing up at the sky in our time, it’s pointing at the star Polaris, and that is our pole star. But Polaris has not always been the pole star precisely because of this wobble on the axis of the Earth.

(00:27:52)
Other stars have occupied the pole position, and sometimes the extended North Pole of the earth points at empty space. There is no pole star. That’s one of the obvious results of the wobble on the Earth’s axis. The other one is that there are 12 well-known constellations in our time, the 12 constellations of the zodiac that lie along what is referred to as the path of the sun. The earth is orbiting the sun, and we are seeing what’s behind it, what’s in direct line with the sun in our view. And the zodiacal constellations all lie along the path of the sun. So at different times of the year, the sun will rise against the background of a particular zodiacal constellation. Today we live in the age of Pisces, and it’s definitely not an accident that the early Christians used the fish as their symbol. This is another area where I differ from archeology.

(00:28:46)
I think the constellations of the zodiac were recognized as such much earlier than we suppose. Anyway, to get to the point, the key marker of the year, certainly in the northern hemisphere, was the spring equinox. The question was: what constellation is rising behind the sun? What constellation is housing the sun at dawn on the spring equinox? Right now it’s Pisces. In another 150 years or so, it’ll be Aquarius. We do live in the dawning of the age of Aquarius. Back in the time of the late ancient Egyptians, it was Aries, going back to the time of Ramesses or before. Before that it was Taurus, and so on and so forth. It goes backwards through the zodiac until, 12,500 years ago, you come to the age of Leo, when the constellation of Leo houses the sun on the spring equinox. Now, this process unfolds very, very, very, very slowly, the whole cycle, and it is a cycle.

(00:29:47)
It repeats itself roughly every 26,000 years. To put a more exact figure on it, 25,920 years. That may be a convention. Some scholars would say it was a bit less than that, some a bit more. But you’re talking fractions. It’s in that area, 25,920 years. And to observe it, you really need more than one human lifetime, because it unfolds very, very slowly, at a rate of one degree every 72 years. And the parallel that I often give is: hold your finger up to the distant horizon. The movement in one lifetime, in a period of 72 years, is about the width of your finger. It’s not impossible to notice in a lifetime, but it’s difficult. You’ve got to pass it on. And what seems to have happened is that some ancient culture, the culture that Santillana and von Dechend call some almost unbelievable ancestor culture, worked out the entire process of precession and selected the key numbers of precession, of which the most important number, the governing number, is the number 72.
Graham Hancock
(00:31:00)
But we also have numbers related to the number 72. 72 plus 36 is 108. 108 divided by two is 54. These numbers are also found in mythology all around the world. There were 72 conspirators who were involved in killing the god Osiris in Ancient Egypt and nailing him up in a wooden coffer and dumping him in the Nile. There are 432,000 in the Rigveda. 432,000 is a multiple of 72.

(00:31:32)
And at Angkor, in Cambodia, for example, you have the bridge to Angkor Thom. And on that bridge you have figures on both sides, sculpted figures, which are holding the body of a serpent. That serpent is Vasuki, and what they’re doing is they’re churning the milky ocean. It’s the same metaphor of churning and turning that’s defined in the story of Hamlet’s Mill, of Amlodhi’s mill. There are 54 on each side. 54 plus 54 is 108. 108 is 72 plus 36. It’s a precessional number according to the work that Santillana and von Dechend did.

(00:32:13)
And the fascination with this number system and its discovery all around the world is one of the puzzles that intrigues me, and suggests to me that we are looking at ancestral knowledge that was passed down, probably from a specific single common source at one time, but then was spread out very widely around the world.
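The arithmetic behind the numbers Hancock cites is simple enough to check directly. As a minimal sketch (the figure of 72 years of drift per degree is the rounded convention he describes, not an exact astronomical value):

```python
# Precessional arithmetic, using the conventional round figure
# of 72 years of equinoctial drift per degree.
YEARS_PER_DEGREE = 72

# Full precessional cycle: 360 degrees of drift.
full_cycle = 360 * YEARS_PER_DEGREE      # 25,920 years

# Time the spring equinox spends in each of the 12 zodiacal signs.
years_per_sign = full_cycle // 12        # 2,160 years

# The related numbers Santillana and von Dechend trace through myth.
n108 = 72 + 36                           # 108 (54 + 54 figures at Angkor)
n54 = n108 // 2                          # 54
rigveda = 432_000                        # 432,000 = 72 x 6,000

print(full_cycle, years_per_sign, n108, n54, rigveda // YEARS_PER_DEGREE)
# → 25920 2160 108 54 6000
```

With these round conventions, every number in the mythological set falls out as a multiple or simple combination of 72, which is the pattern the book Hamlet's Mill is built around.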
Lex Fridman
(00:32:37)
One of the defining ways that you approach the study of human history that I think contrasts with mainstream archeology is that you take this astronomical symbolism and the relationship between humans and the stars very seriously.
Graham Hancock
(00:32:55)
I do, as I believe the ancients did.
Lex Fridman
(00:32:57)
I think it’s important to consider what humans would’ve thought about back then. Now we have a lot of distractions; we have social media, we can watch videos on YouTube and whatever. But back then, especially before electricity, the stars were the sexiest thing to talk about.
Graham Hancock
(00:33:18)
There’s no light pollution.
Lex Fridman
(00:33:19)
There’s no light pollution so, I mean, you’re [inaudible 00:33:21]-
Graham Hancock
(00:33:21)
That’s the majesty of the heavens.
Lex Fridman
(00:33:23)
Every single night you’re spending looking up at the stars. And you can imagine there’s a lot of status value in being the guy who’s very good at studying the stars, as the scientist of the day. And I’m sure there’s going to be these geniuses that emerge. They’re able to do two things. One, tell stories about the gods or whatever, based on the stars. And then also, as we’ll probably talk about, use the stars practically, for navigation, for example.
Graham Hancock
(00:33:52)
Oh, yeah. Definitely.
Lex Fridman
(00:33:53)
So it makes sense that the stars had a primal importance for the ideas of the times, for the status, for religious explorations.
Graham Hancock
(00:34:07)
It was an ever-present reality, and it was bright and it was brilliant, and it was full of lights. It’s inconceivable that the ancients would not have paid attention to it. It was an overwhelming presence.

(00:34:19)
And that’s one of the reasons why I’m really confident that the constellations that we now recognize as the constellations of the zodiac were recognized much earlier, because it’s hard to miss when you pay attention to the sky, that the sun over the course of the solar year is month by month rising against the background of different constellations. And then there’s a much longer process, the process of precession, which takes that journey backwards and where we have a period of 2,160 years for each sign of the zodiac.

(00:34:49)
I think it would’ve been hard for the ancients to have missed that. They might not have identified the constellations in exactly the same way we do today. That may well be a Babylonian or Greek convention, but that the constellations were there I think was very clear. And that they were special constellations, unlike other ones higher up in the sky which were not on the path of the sun, that people paid attention to.
Lex Fridman
(00:35:11)
Well, but detecting the precession of the equinoxes is hard, especially because they don’t have any writing systems, they don’t have any mathematical systems. So everything is told through words.
Graham Hancock
(00:35:22)
Yeah. Let’s not underestimate oral traditions. Oral traditions, that’s something we’ve lost in our culture today. One of the things that happens with the written word is that you gradually lose your memory.

(00:35:37)
Actually, there’s a nice story from Ancient Egypt about the god Thoth, the god of wisdom, who is very proud of himself because he has invented writing. “Look at this gift,” he says to a mythical pharaoh of that time, “Look at the gift that I’m giving humanity, writing. This is a wonderful thing. It’ll enable you to preserve so much that you would otherwise lose.” And the pharaoh in this story replies to him, “No, you have not given us a wonderful gift. You have destroyed the art of memory. We will forget everything. Words will roam free around the world, not accompanied by any wise advice to set them into context.” And actually that’s a very interesting point. And we do know that cultures that still do have oral traditions are able to preserve information for very long periods of time.

(00:36:27)
One thing I think is clear in any time, in any period of history, is human beings love stories. We love great stories. And one way to preserve information is to encode it, embed it in a great story. And so carefully done that actually, it doesn’t matter whether the storyteller knows that they’re passing on that information or not. The story itself is the vehicle. And as long as it’s repeated faithfully, the information contained within it will be passed on. And I do think this is part of the story of the preservation of knowledge.
Lex Fridman
(00:37:03)
That’s one of the reasons that you take myths seriously.

Younger Dryas impact hypothesis

Graham Hancock
(00:37:06)
I take them very seriously. There’s many reasons, but I can’t help being deeply impressed and deeply puzzled by the worldwide tradition of a global cataclysm within human memory. I mean, we know scientifically that there have been many, many cataclysms in the past going back millions of years. I mean, the best-known one of course is the K-Pg event, as it’s now called, that made the dinosaurs extinct 65 or 66 million years ago.

(00:37:42)
But has there been such a cataclysm in the lifetime of the human species? Yeah, the Mount Toba eruption about 70,000 years ago was pretty bad. But a global cataclysm, the Younger Dryas really ticks all the boxes as a worldwide disaster, which definitely involved sea level rise, both at the beginning and at the end of the Younger Dryas. It definitely involved the swallowing up of lands that previously had been above water.

(00:38:12)
And I think it’s an excellent candidate for this worldwide tradition of a global cataclysm, of which one of the distinguishing characteristics, but not the only one, was a flood, an enormous flood, and the submergence of lands that had previously been above water. The fact that this story is found all around the world suggests to me that more than local flooding is involved. The archeological explanation is, look, people suffer local floods all the time. I mean, as we’re talking, there’s flooding in Florida, but I don’t think anybody in Florida is going to make the mistake of believing that that’s a global flood. They know it’s local.

(00:38:52)
But that’s largely the argument of archeology when dealing with the flood myths: that some local population experienced a nasty local flooding event and decided to say that it affected the whole world. I’m not persuaded by that, particularly since we know there was a nasty epoch, the Younger Dryas, when flooding did occur, and when the Earth was subjected to events cataclysmic enough to extinguish entirely the megafauna of the ice age.
Lex Fridman
(00:39:20)
There is the Younger Dryas impact hypothesis that provides an explanation of what happened during this period that resulted in such rapid environmental change. So can you explain this hypothesis?
Graham Hancock
(00:39:32)
Yes. The Younger Dryas impact hypothesis, YDIH for short, is not a lunatic fringe theory as its opponents often attempt to write it off. It’s the work of more than 60 major scientists working across many different disciplines, including archeology and including oceanography as well.

(00:39:59)
And they are collectively puzzled by the sudden onset of the Younger Dryas, and by the fact that it is accompanied 12,800 years ago by a distinct layer in the Earth. You can see it most clearly at Murray Springs in Arizona, for example. It’s about the width of a human hand, and there’s a draw there that’s been cut by flash flooding at some time, which has exposed the sides of the draw.

(00:40:29)
And you can see the cross-section. And in the cross-section is this distinct dark layer that runs through the Earth. And it contains evidence of wildfires, there is a lot of soot in it. There are also nanodiamonds in it. There is shocked quartz in it. There is quartz that’s been melted at temperatures in excess of 2,200 degrees centigrade. There are carbon microspherules. All of these are proxies for some kind of cosmic impact.

(00:40:59)
I talked a moment ago about the extinction of the dinosaurs. Luis and Walter Alvarez, who made that incredible discovery, initially their discovery was based entirely on impact proxies, just as the Younger Dryas is. There was no crater. And for a long time they were disbelieved because they couldn’t produce a crater. But when they finally did produce that deeply buried Chicxulub crater, that’s when people started to say, “Yeah, they have to be right.” But they weren’t relying on the crater, they were relying on the impact proxies. And they’re the same impact proxies that we find in what’s called the Younger Dryas boundary layer all around the world.

(00:41:36)
So it’s the fact that at that moment the Earth tips into a radical climate shift. It had been warming up for at least 2,000 years before 12,800 years ago, and people at the time must have been feeling a great sense of relief. “We’ve been living through this really cold time, but it’s getting better. Things are getting better.” And then suddenly, around 12,800 years ago, some might say 12,860 years ago, there’s a massive plunge in global temperatures, and the world suddenly gets as cold as it was at the peak of the ice age. And it’s almost literally overnight. It’s very, very, very rapid.

(00:42:15)
Normally in an epoch when the Earth is going into a freeze, you would not expect sea levels to rise. But there is a sea level rise, a sudden one, right at the beginning of the Younger Dryas. And then you have this long frozen period from 12,800 to 11,600 years ago. And then, equally dramatically and equally suddenly, the Younger Dryas comes to an end and the world very rapidly warms up. And you have a recognized pulse of meltwater at that time, as the last of the glaciers collapse into the sea, called Meltwater Pulse 1B, around about 11,600 years ago.

(00:42:53)
This is a period which is very tightly defined, it’s a period when we know that human populations were grievously disturbed. That’s when the so-called Clovis culture of North America vanished entirely from the record during the Younger Dryas. And it’s the time when the mammoths and the saber-toothed tigers vanished from the record as well.
Lex Fridman
(00:43:13)
Is there a good understanding of what happened geologically, whether there was an impact or not? What explains this huge dip in temperature and then rise in temperature?
Graham Hancock
(00:43:24)
The abrupt cessation of the meridional overturning circulation, of which the Gulf Stream is the best-known part. The main theory that’s been put forward up to now, and I don’t dispute that theory at all, is that the sudden freeze was caused by the cutting off of the Gulf Stream, which is part of the central heating system of our planet. So no wonder it became cold.

(00:43:53)
But what’s not really been addressed before is why that happened, why the Gulf Stream was cut, why a sudden pulse of meltwater went into the world ocean, and it was so much of it and it was so cold that it actually stopped the Gulf Stream in its tracks. And that’s where the Younger Dryas impact hypothesis offers a very elegant and very satisfactory solution to the problem.

(00:44:14)
Now, the hypothesis, of course, is broader than that. Amongst the scientists working on it is, for example, Bill Napier, an astrophysicist and astronomer. They have assembled a great deal of evidence, which suggests that the culprit in the Younger Dryas impact event or events was what we now call the Taurid meteor stream, which the Earth still passes through twice a year. It’s now about 30 million kilometers wide; it takes the Earth a couple of days to pass through it on its orbit. It passes through it in June, and again at the end of October.

(00:44:54)
The suggestion is that the Taurid meteor stream is the end product of a very large comet that entered the solar system round about 20,000 years ago. Came in from the Oort cloud, got trapped by the gravity of the Sun, and went into orbit around the Sun, an orbit that crossed the orbit of the Earth. However, when it was one object, the likelihood of a collision with the Earth was extremely small.

(00:45:22)
But as it started to do what all comets do, which is to break up into multiple fragments, because these are chunks of rock held together by ice, and as they warm up, they split and disintegrate and break into pieces, its debris stream became larger and larger and wider and wider. And the theory is that 12,800 years ago, the Earth passed through a particularly dense part of the Taurid meteor stream and was hit by multiple impacts all around the planet, certainly from the west of North America as far east as Syria.

(00:45:58)
And we are by and large not talking about impacts that would’ve caused craters, although there certainly were some; we are talking about air bursts. When an object is 100 or 150 meters in diameter and it’s coming in very fast into the Earth’s atmosphere, it is very unlikely to reach the ground; it’s going to blow up in the sky. And the best-known recent example of that is the Tunguska event in Siberia, which took place on the 30th of June 1908.

(00:46:33)
The Tunguska event, nobody disputes, was definitely an air burst of a cometary fragment. And the date is interesting because the 30th of June is the height of the Beta Taurids. It’s one of the two times when the Earth is going through the Taurid meteor stream. Well, luckily that part of Siberia wasn’t inhabited, but 2,000 square miles of forest were destroyed. If that had happened over a major city, we would all be thinking very hard about objects out of the Taurid meteor stream and about the risk of cosmic impact.

(00:47:05)
So the suggestion is that it wasn’t one impact, it wasn’t two impacts, it wasn’t three impacts, it was hundreds of air bursts all around the planet. Coupled with a number of bigger objects, which the scientists working on this think hit the North American ice cap largely. Some of them may also have hit the Northern European ice cap, resulting in that sudden otherwise unexplained flood of meltwater that went into the world ocean and caused the cooling that then took place.

(00:47:34)
But this was a disaster for life all over the planet. And it’s interesting that one of the sites where they find the Younger Dryas boundary, where they find overwhelming evidence of an air burst, with all the shocked quartz, the carbon microspherules, the nanodiamonds, the trinitite, and so on and so forth, all of those impact proxies, is Abu Hureyra. That was a settlement within 150 miles of Gobekli Tepe, and it was hit 12,800 years ago and it was obliterated. Interestingly, it was re-inhabited by human beings within probably five years, but it was completely obliterated at that time. And it is difficult to imagine that the people who lived in that area would not have been very impressed by what they saw happening, by these massive explosions in the sky and the obliteration of Abu Hureyra.

(00:48:30)
Now this is a theory, the Younger Dryas impact. It’s a hypothesis actually, it’s not even a theory. A theory is, I think, considered a higher level than a hypothesis. That’s why it’s the Younger Dryas impact hypothesis. And of course it has many opponents who disagree with it. And there have been a series of peer-reviewed papers published supposedly debunking the Younger Dryas impact hypothesis. One, I think from 2011, was called “A Requiem for the Younger Dryas Impact Hypothesis.” And one published just a few months or a year ago was called “A Complete Refutation of the Younger Dryas Impact Hypothesis,” something like that, some lengthy title.

(00:49:14)
So it’s a hypothesis that has its opponents, and even within those of us who are looking at the alternative side of history, there are different points of view. Robert Schoch from Boston University, the geologist who demonstrated that the erosion on the Sphinx may well have been caused by exposure to a long period of very heavy rainfall, he doesn’t go for the Younger Dryas impact hypothesis. He fully accepts that the Younger Dryas was a global cataclysm and that the extinctions took place, but he thinks it was caused by some kind of massive solar outburst.

(00:49:50)
What everybody’s agreed on is that the Younger Dryas was bad, but there is dispute about what caused it. I personally have found the Younger Dryas impact hypothesis to be the most persuasive, the one that most effectively explains all the evidence.
Lex Fridman
(00:50:05)
How important is the impact hypothesis to your understanding of the ice age advanced civilizations? Is it possible to have another explanation for environmental factors that could have erased most of an advanced civilization during this period?
Graham Hancock
(00:50:21)
In a sense, it’s not the impact hypothesis that is central to what I’m saying, it’s the Younger Dryas that’s central to what I’m saying. And the Younger Dryas required a trigger, something caused it. I think the Younger Dryas impact hypothesis, the notion that we’re looking at a debris stream of a fragmenting comet, and we can still see that debris stream because it’s still up there and we still pass through it twice a year, is the best explanation. But I don’t mind other explanations. It’s good that there are other explanations. The Younger Dryas is a big mystery, and it’s not a mystery that’s been solved yet.

(00:50:55)
And that term, advanced civilization, is another one that is easily misunderstood. And I’ve tried to make clear many, many times that when we consider the possibility of something like a civilization in the past, we shouldn’t imagine that it’s us, that it’s something like us. We should expect it to be completely different from us, but that it would’ve achieved certain things.

(00:51:22)
Amongst the clues that intrigue me are those precessional numbers that are found all around the world, and a category of ancient maps called Portolanos, which suddenly started to appear just after the crusade that entered and sacked Constantinople. And they’re extremely accurate maps. Most of the ones that have survived are extremely accurate maps of the Mediterranean alone, but some of them show much wider areas.

(00:51:54)
For example, on these Portolano-style maps, you do find a depiction of Antarctica again and again. And another thing that these maps have in common is that many of the mapmakers state that they base their maps on multiple older source maps, which have not survived. These maps are intriguing because they have very accurate relative longitudes.

(00:52:16)
Our civilization did not crack the longitude problem until the mid-18th century with Harrison’s chronometer, which was able to keep accurate time at sea so you could have the time in London and you could have the local time at sea at the same time. And then you could work out your longitude. There might be other ways of working out longitude as well, but there it is. The fact is these Portolanos have extremely accurate relative longitudes.
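The chronometer method described here can be checked with simple arithmetic: the Earth rotates 360 degrees in 24 hours, so each hour of difference between the reference clock (say, London time) and local solar time corresponds to 15 degrees of longitude. A minimal sketch, with illustrative times that are not taken from the conversation:

```python
# Longitude from a marine chronometer, as described: the Earth turns
# 15 degrees of longitude per hour, so the gap between reference time
# (kept by the chronometer) and local solar time gives east-west position.

def longitude_from_times(reference_hours: float, local_hours: float) -> float:
    """Return longitude in degrees (positive = east of the reference)."""
    return (local_hours - reference_hours) * 15.0

# Illustrative: local noon observed while the chronometer reads 15:00
# reference time -> the ship is 3 hours behind, i.e. 45 degrees west.
print(longitude_from_times(15.0, 12.0))  # -45.0
```

So a navigator who observes local noon while the chronometer reads 15:00 London time knows the ship lies 45 degrees west of London.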

(00:52:43)
Secondly, some of them show the world, to my eye, as it looked during the ice age. They show a much-extended Indonesian and Malaysian peninsula, and the series of islands that make up Indonesia today are all grouped together into one landmass. And that was the case during the ice age. That was the Sunda Shelf. And the presence of Antarctica on some of these maps also puzzles and intrigues me and is not satisfactorily explained, in my view, by archeology, which says, “Oh, those mapmakers, they felt that the world needed something underneath it to balance it, so they put a fictional landmass there.”

(00:53:21)
I don’t think that makes sense. I think somebody was mapping the world during the last ice age, but that doesn’t mean that they had our kind of tech. It means that they were following that exploration instinct. That they knew how to navigate. They’d been watching the stars for thousands of years before, they knew how to navigate and they knew how to build seagoing ships. And they explored the world and they mapped the world.

(00:53:46)
Those maps were made a very, very long time ago. Some of them, I believe, were likely preserved in the Library of Alexandria. I think even then they were being copied and recopied. We don’t know exactly what happened to the Library of Alexandria, except that it was destroyed. I suggest it’s likely this was during the period of the Roman Empire. I suggest it’s likely that some of those maps were taken out of the library and taken to Constantinople, and that’s where they were liberated during the crusade and entered world culture again and started to be copied and recopied.
Lex Fridman
(00:54:23)
From this perspective, when we talk about advanced ice age civilization, it could have been a relatively small group of people with the technology of their scholars of the stars and their expert seafaring navigators.
Graham Hancock
(00:54:38)
Yes, that’s about as far as I would take it. And when I say that, as I have said on a number of occasions, that it had technology equivalent to ours in the 18th century, I’m referring specifically to the ability to calculate longitude. I’m not saying that they were building steam engines. I don’t see any evidence for that.
Lex Fridman
(00:54:57)
And perhaps some building tricks and skills of how to [inaudible 00:55:03].
Graham Hancock
(00:55:02)
Well, definitely. And this, again, is where you come to a series of mysteries, which are perhaps best expressed on the Giza Plateau in Egypt with the three Great Pyramids. And the extraordinary megalithic temples that many people don’t pay much attention to on the Giza Plateau and the Great Sphinx itself. This is an area of particular importance in understanding this issue.

The Great Pyramid and the Sphinx of Giza

Lex Fridman
(00:55:31)
Well, can you actually describe the Sphinx and the Great Pyramids and what you find most mysterious and interesting about them?
Graham Hancock
(00:55:37)
Well, first of all, the astronomy. And here I must pay tribute to two individuals, actually three individuals in particular. One of them is John Anthony West, who passed away in 2018. He was the first person in our era to begin to wonder if the Sphinx was much older than it was supposed to be.

(00:55:57)
Actually, he got that idea from a philosopher called Schwaller de Lubicz, who’d noticed what he thought was water erosion on the body of the Sphinx. John West picked that up, and he was a great amateur Egyptologist himself. He spent most of his life in Egypt and he was hugely versed in Ancient Egypt. And when he looked at the Sphinx and at the strange scalloped erosion patterns and the vertical fissures, particularly in the trench around the Sphinx, he began to think maybe Schwaller was right, maybe there was some sort of flooding here.

(00:56:29)
And that’s when he brought in Robert Schoch, the second person I’d like to recognize, a geologist at Boston University. He brought Schoch to Giza, and Schoch was the first geologist to stick his neck out, risk the ire of Egyptologists, and say, “Well, it looks to me like the Sphinx was exposed to at least a thousand years of heavy rainfall.” And as Schoch’s calculations have continued, as he’s continued to be immersed in this mystery, he’s continuously pushed that back. And he’s now looking at a date of around 12,000 to 12,500 years ago, during the Younger Dryas, for the creation of the Great Sphinx.

(00:57:05)
And then, of course, this is the period of the wet Sahara, the humid Sahara. The Sahara was a completely different place during the ice age. There were rivers in it, there were lakes in it, it was fertile, it was possibly densely populated, and there was a lot of rain. It’s not that there’s no rain in Giza today, but there’s relatively little, not enough to cause that erosion damage on the Sphinx.

(00:57:31)
The next person who needs to be mentioned in this context is Robert Bauval. Robert and I have co-authored a number of books together. Unfortunately, Robert has been very ill for the last seven years. He’s got a very bad chest infection. And I think also that Robert became very demoralized by the attacks of Egyptologists on his work. But Robert is the genius, and it does take a genius sometime to make these connections because nobody noticed it before, that the three pyramids of Giza are laid out on the ground in the pattern of the three stars of Orion’s belt.

(00:58:09)
And skeptics will say, “Well, you can find any buildings and line them up with any stars you want,” but Orion actually isn’t any old constellation. Orion was the god Osiris in the sky. The ancient Egyptians called the Orion constellation Sahu, and they recognized it as the celestial image of the god Osiris. So what’s being copied on the ground is the belt of a deity, of a celestial deity. It’s not just a random constellation.

(00:58:36)
And then when we take precession into account, you find something else very intriguing happening. First of all, you find that the exact orientation of the pyramids as it is today, and pretty much as it was when they were supposed to have been built 4,500 years ago, is not precisely related to how Orion’s Belt looked at that time. There’s a bit of a twist, they’re not quite right. But as you precess the stars backwards, as you go back and back and back and you come to around 10,500 BC, 12,500 years ago in the Younger Dryas, you find that suddenly they lock perfectly. They match perfectly with the three pyramids on the ground.
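The rate of precession behind this back-calculation can be sketched numerically. The figure of roughly 25,772 years for one full precessional cycle is the standard modern estimate, not a number taken from the conversation:

```python
# Rough precession arithmetic: how far the sky drifts over a span of years.
PRECESSION_CYCLE_YEARS = 25_772  # modern estimate of one full precessional cycle

def precession_degrees(years: float) -> float:
    """Degrees of precessional drift accumulated over the given span."""
    return 360.0 * years / PRECESSION_CYCLE_YEARS

print(round(precession_degrees(71.6), 2))    # about 1 degree in a human lifetime
print(round(precession_degrees(12_500), 1))  # 174.6 degrees: nearly half a cycle
```

Over the 12,500 years being discussed, the sky drifts through nearly half a full precessional cycle, which is why the star positions look so different at 10,500 BC than at 2,500 BC.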

(00:59:20)
And that’s the same moment that the Great Sphinx, an equinoctial monument, aligned perfectly to the rising sun on the spring equinox. Anybody can test this for themselves. Just go to Giza on the 21st of March, be there before dawn, stand behind the Sphinx, and you will see the sun rising directly in line with the gaze of the Sphinx. But the question is what constellation was behind the Sphinx? And 12,500 years ago it was the constellation of Leo. And actually the constellation of Leo has a very Sphinx-like look. And I and my colleagues are pretty sure that the Sphinx was originally a lion entirely. And over the thousands of years, it became damaged, it became eroded, particularly the part of it that sticks out, the head. There were periods when the Sphinx was completely covered in sand, but still the head stuck out.

(01:00:14)
By the time you come to the Fourth Dynasty, when the Great Pyramids are supposedly built, the original lion head would’ve been a complete mess. And we suggest that it was then re-carved into a pharaonic head. Egyptologists think it was the pharaoh Khafre, but there’s no real strong resemblance. It’s definitely wearing the nemes headdress of an ancient Egyptian pharaoh, though, and we think that that’s the result of a recarving of what was originally not only a lion-bodied, but also a lion-headed monument.

(01:00:50)
It wouldn’t make sense to create an equinoctial marker in the time of Khafre 4,500 years ago, and the Sphinx is an equinoctial marker. I mean, it’s 270 feet long and 70 feet high and it’s looking directly at the rising sun on the equinox. If you created it then, you’d be more likely to create it in the shape of a bull, because that was the age of Taurus, when the constellation of Taurus housed the sun on the spring equinox. So why is it a lion? And again, we think that’s because of that observation of the skies and putting on the ground, as above, so below, an image of the sky at a particular time.

(01:01:32)
Now, the fact that the principal monuments of the Giza Plateau, the three Great Pyramids and the Great Sphinx, all lock astronomically on the date of around 10,500 BC (a fact, of course, that Egyptologists completely dispute) is, to me, most unlikely to be an accident. And actually, if you look with computer software at the sky at that time, you’ll see that the Milky Way is very prominent and seems to be mirrored on the ground by the river Nile-
Graham Hancock
(01:02:00)
…prominent and seems to be mirrored on the ground by the river Nile. I suggest that may be one of the reasons amongst many why Giza was chosen as the site for this very special place. The point I want to make is that an astronomical design on the ground, which memorializes a very ancient date, does not have to have been done 12,500 years ago. If, from the ancient Egyptian point of view, you’re there 4,500 years ago, and there’s a time 8,000 years before that which is very, very, very important to you, you could use astronomical language and megalithic architecture to memorialize that date on the Giza Plateau, which is what we think we’re looking at, except for one thing, and that’s the erosion patterns on the Sphinx.

(01:02:52)
We are pretty sure that the Sphinx, at least, does date back to 12 and a half thousand years ago and, with it, the megalithic temples: the so-called Valley Temple, which stands just to the east and just to the south of the Sphinx, and the Sphinx Temple, which stands directly in front of the Sphinx. The Sphinx Temple has largely been destroyed. But the Valley Temple, attributed to Khafre on no good grounds whatsoever, is a huge megalithic construction with blocks of limestone that weigh up to 100 tons each. Yet it has been remodeled, refaced with granite. There are granite blocks placed on top of the core limestone blocks. Those core limestone blocks were already eroded when the granite blocks were put there. Why? Because the granite blocks have actually been purposefully and deliberately cut to fit into the erosion marks on the, we believe, much older megalithic blocks there.

(01:03:56)
I think Giza is a very complicated site. I would never seek to divorce the dynastic ancient Egyptians from the Great Pyramids. They were closely involved in the construction of the Great Pyramids as we see them today. But what I do suggest is that there were very low platforms on the Giza Plateau that are much older and that when we look at the three Great Pyramids, we are looking at a renovation and a restoration and an enhancement of much older structures that had existed on the Giza Plateau for a much longer period before that. Actually, the Great Pyramid is built around a natural hill. That natural hill might’ve been seen as the original primeval mound to the ancient Egyptians.
Lex Fridman
(01:04:44)
So the idea is that the Sphinx was there long before the pyramids, and the pyramids were built by the Egyptians to celebrate further an already holy place.
Graham Hancock
(01:04:55)
Yeah. There were platforms in place where the pyramids stand, not the pyramids as we see them today, but the base of those pyramids was already in place at that time.
Lex Fridman
(01:05:08)
What’s the evidence that the Egyptologists use to make the attributions that they do for the dating of the pyramids and the Sphinx?
Graham Hancock
(01:05:16)
Well, the three great pyramids of Giza are different from later pyramids. This is another problem that I have with the whole thing: the story of pyramid building. When did it really begin? The timeline that we get from Egyptology is that the first pyramid is the pyramid of the Pharaoh Djoser, the Step Pyramid at Saqqara, about 100 years or so before the Giza pyramids were built. Then we have this explosion in the Fourth Dynasty of true pyramids. We have three of them attributed to a single Pharaoh, Sneferu, who built, supposedly, the pyramid at Meidum and the two pyramids at Dahshur, the Bent and the Red Pyramid.

(01:06:06)
Then, within that same 100-year span, we have the Giza pyramids being built. This is according to the orthodox chronology. Then, suddenly, once the Giza project is finished, pyramid building goes into a massive slump in Ancient Egypt. The pyramids of the Fifth Dynasty are, frankly speaking, a mess outside. They’re very inferior constructions. You can hardly recognize them as pyramids at all. But when you go inside them, you find that they’re extensively covered in hieroglyphs and imagery, repeating the name of the king who was supposedly buried in that place. Whereas, the Giza pyramids have no internal inscriptions whatsoever. What we do have is one piece of graffiti about which there is some controversy.

(01:06:56)
Basic statistics: it’s a 6 million-ton structure. Each side is about 750 feet long. It’s aligned almost perfectly to true north, south, east, and west, within 3/60ths of a single degree, that is, three arcminutes, because degrees are divided into 60ths. It’s the precision of the orientation and the absolutely massive size of the thing, plus the very complicated internal passageways involved in it. In the ninth century, the Great Pyramid still had its facing stones in place, but there was an Arab Caliph, Khalifa al-Mamun, who had already realized that other pyramids did have their entrances in the north face. Nobody knew where the entrance to the Great Pyramid was. But he figured if there was an entrance to this thing, it was going to be in the north face somewhere. He put together a team of workers. They went in with sledgehammers. They started smashing where he thought the entrance would be. They cut their way into the Great Pyramid for a distance of maybe 100 feet. Then the hammering that they did dislodged something. They heard, a little bit further away, something big falling, and they realized there was a cavity there. They started heading in that direction. Then they joined the internal passageways of the Great Pyramid, the descending and the ascending corridors.
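The quoted alignment precision can be made concrete. Three arcminutes is 3/60 of a degree; over the roughly 750-foot side length given above, that corresponds to a deviation of well under a foot. A quick check using standard trigonometry:

```python
import math

# The Great Pyramid's sides are said to be aligned to true north within
# 3/60 of a degree (three arcminutes). Over a ~750-foot side, how big
# a lateral offset does that misalignment produce at the far end?
misalignment_deg = 3 / 60  # 0.05 degrees
side_feet = 750.0

offset_feet = side_feet * math.tan(math.radians(misalignment_deg))
print(round(misalignment_deg, 3))  # 0.05
print(round(offset_feet, 2))       # 0.65
```

In other words, over the full 750-foot side, the wall strays less than eight inches from a perfect north-south line.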

(01:08:28)
When you go up the ascending corridor, you notice that every one of the internal passageways in the Great Pyramid that people can walk in slopes at an angle of 26 degrees. That’s interesting, because the angle of slope of the exterior of the Great Pyramid is 52 degrees. We know mathematicians were at work, as well as geometers, in the creation of the Great Pyramid.

(01:08:50)
If you go up the Grand Gallery, which is at the end of the so-called ascending corridor, and it’s above the so-called Queen’s Chamber… You go up the Grand Gallery. You’re eventually going to come to what is known as the King’s Chamber in which there is a sarcophagus. That sarcophagus is a little bit too big to have been got in through the narrow entrance passageway. It’s almost as though the so-called King’s Chamber was built around the sarcophagus, already in place.

(01:09:17)
Above the King’s Chamber are five other chambers. These are known as relieving chambers. The theory was that they were built to relieve the pressure on the King’s Chamber of the weight of the monument. But I think what makes that theory dubious is the fact that even lower down, where more weight was involved, you have the Queen’s Chamber, and there are no such relieving chambers above that.

(01:09:38)
In the topmost of these five chambers, a British adventurer and vandal called Howard Vyse, who dynamited his way into those chambers in the first place, allegedly found… Well, he claims he found a piece of graffiti left by a work-gang naming the Pharaoh Khufu. It’s true. I’ve been in that chamber, and there is the cartouche of Khufu there. Quite recognizable. But the dispute around it is whether that is a genuine piece of graffiti dating from the Old Kingdom or whether Howard Vyse actually put it there himself, because he was in desperate need of money at the time. I’m not sure what the answer to that question is. But it’s one of the reasons that Egyptologists feel confident in saying that the pyramid is the work of Khufu. Another is what are called the Wadi al-Jarf Papyri, found on the Red Sea, the diary of an individual named Merer. He talks about bringing highly polished limestone to the Great Pyramid. It’s clear that what he’s talking about is the facing stones of the Great Pyramid, not the body of the Great Pyramid. He’s talking about the facing stones of the Great Pyramid during the reign of Khufu. That’s another reason why the Great Pyramid is attributed to Khufu. I think Khufu was undoubtedly involved in the Great Pyramid, and in a big way. But I think he was building upon and elaborating a much older structure.

(01:11:09)
I think the heart of that structure is the subterranean chamber, which is 100 feet vertically beneath the base of the Great Pyramid. Anybody who suffers from claustrophobia will not enjoy being down there. You’ve got to go down a 26-degree sloping corridor for a distance of about 300 feet. It’s 100 feet vertically, but the slope means you’re going to walk a distance of… Not walk. You’ve got to ape-walk. You’re almost going to have to crawl. I’ve learned from long experience that the best way to go down these corridors is actually backwards. If you go forward, you keep bumping your head, because they’re only three feet five inches high. You get down to the bottom, you have a short horizontal passage, and then you get into the subterranean chamber.

(01:11:54)
The theory of Egyptology is that this was supposed to be the burial place of Khufu, but after cutting out that 300-foot-long, 26-degree sloping passage, a lot of which passes through bedrock, and having cut the subterranean chamber out of bedrock, gone to all that trouble, they decided they wouldn’t bury him there. They built what’s now known as the Queen’s Chamber as his burial chamber. But then they decided that wouldn’t do either. They then built the King’s Chamber, and that’s where the Pharaoh is supposed to have been buried. Those Arab raiders under Caliph al-Ma’mun didn’t find anything in the Great Pyramid at all.
Lex Fridman
(01:12:31)
Your idea is that the Sphinx and maybe some aspects of the pyramid were much earlier. Why that’s important is, in that case, it would be evidence of some transfer of technology-
Graham Hancock
(01:12:47)
Yes.
Lex Fridman
(01:12:47)
…from a much older civilization. The idea is that during the Younger Dryas, most of that civilization was either destroyed or damaged, and they desperately scattered across the globe.
Graham Hancock
(01:13:01)
Seeking refuge.
Lex Fridman
(01:13:01)
Seeking refuge and telling stories of maybe, one, the importance of the stars, their knowledge about the stars, and their knowledge about building and knowledge about navigation.
Graham Hancock
(01:13:17)
That’s roughly the idea. It’s interesting that the ancient Egyptians have a notion of an epoch that they call Zep Tepi, which means the first time. This is when the gods walk the earth. This is when seven sages brought wisdom to Ancient Egypt. That is seen as the origin of ancient Egyptian civilization. There are king lists by the ancient Egyptians themselves, king lists that go back way beyond the First Dynasty, 30,000 years into the past in Ancient Egypt, considered to be entirely mythical by Egyptologists. But nevertheless, it’s interesting that there’s that reference to remote time.

(01:14:02)
Now, what you also have in Egypt are what might almost be described as secret societies. The Followers of Horus are one of those, specifically tasked with bringing forward the knowledge from the first time into later periods. The Souls of Pe and Nekhen are another of these mysterious secret-society groups who are possessors of knowledge that they transmit to the future. What I’m broadly suggesting is that those survivors of the Younger Dryas cataclysm who settled in Giza may have been relatively small in number. It’s interesting that they’re referred to in the Edfu Building Texts as seven sages, because that number repeats again and again. It’s also in Mesopotamia.

(01:14:51)
It’s seven sages, seven Apkallu, who come out of the waters of the Persian Gulf and teach people all the skills of agriculture and of architecture and of astronomy. The motif is found all around the world. What I’m suggesting is that a relatively small number of people took refuge in Giza, benefited from the survival skills of the hunter-foragers who lived at Giza at that time, and also passed on their knowledge to those hunter-foragers. But it was not knowledge that was ready to be put into shape at that time. That knowledge was then preserved and kept and handled within very secretive groups that passed it down over thousands of years. Finally, it burst into full form in the Fourth Dynasty in Ancient Egypt.

(01:15:38)
The notion that knowledge might be transferred over thousands of years shouldn’t be absurd. We know, for example, in the case of ancient Israel… It goes back to the time of Abraham, which is pretty much, I think, around 2000 BC. Knowledge has been preserved from that time right up to the present day. If you can preserve knowledge for 4,000 years, you can probably preserve it for eight.

Sahara Desert and the Amazon rainforest

Lex Fridman
(01:16:05)
Now, of course, the error bars on this are quite large, but if an advanced ice-age civilization existed, where do you think it was? Where do you think we might find it one day if it existed, and how big do you think it might have been?
Graham Hancock
(01:16:19)
Well, this is where I’m often accused of presenting a God-of-the-gaps argument, that I think there was a lost civilization because there’s lots of the earth that archeologists have never looked at. Of course, I’m not thinking that. These are very special gaps that I’m interested in. I’m interested in them because of all the curiosities and the puzzlement that I’ve expressed to you before. It’s not just because they’re gaps in the archeological record. It’s because those gaps involve places that were very interesting places to live during the ice age. They specifically include the Sahara Desert, which was not a desert during the ice age and went through this warm wet period when it was very, very fertile. Certainly, some archeology has been done in the Sahara, but it’s fractional. It’s tiny. I think if we want to get into the true origins of Ancient Egyptian civilization, of the peoples of Ancient Egypt, we need to be looking in the Sahara for that.

(01:17:19)
The Amazon rainforest is another example of this. I think the Sahara is about 9 million square kilometers. The Amazon that’s left under dense canopy rainforest is about 5 million square kilometers, maybe closer to six. Then, you have the continental shelves that were submerged by sea level rise at the end of the ice age. Now, it’s well established that sea level rose by 400 feet, but it didn’t rise by 400 feet overnight. It came in dribs and drabs. There were periods of very rapid, quite significant sea level rise, and there were periods when the sea level was rising much more slowly. That 400-foot sea level rise is spread out over a period of about 10,000 years. But there are episodes within it, like meltwater pulse 1A and meltwater pulse 1B, when the flooding was really immense.
Lex Fridman
(01:18:12)
How big do you think it might’ve been? Do you think it was spread across the globe? If there were expert navigators, do you think they spread across the globe?
Graham Hancock
(01:18:23)
Well, the reason that I’m talking about the gaps is I don’t know where this civilization started or where it was based. All I’m seeing are clues and mysteries and puzzles that intrigue me and which suggest to me that something is missing from our past. I’m not inclined to look for that missing something in, for example, Northern Europe, because Northern Europe was not a very nice place to live during the ice age. I mean, nobody smart would build a civilization in Northern Europe 12,000 years ago. It was a hideous, frozen wasteland. The places to look are places that were hospitable and welcoming to human beings during the ice age. That, of course, includes the coastlines that are now underwater. Of course, it includes the Sahara Desert. Of course, it includes the Amazon rainforest as well. All of these places, I think, are candidates for “my lost civilization.” Because I think, largely from those ancient maps, that it was a navigating seafaring civilization, I suspect that it wasn’t only in one place. It was probably in a number of places.

(01:19:31)
Then, I can only speculate. Maybe there was a cultural value where it was felt that it was not appropriate to interfere with the lives of hunter-foragers at that time. Maybe it was felt that they should keep their distance from them, just as, even today, there is a feeling that we shouldn’t be interfering too much with the uncontacted tribes in the Amazon rainforest. Although interestingly, some of those tribes are now using cell phones. That possibility may have been there in the past. Only when we come to a global cataclysm does it become essential to have outreach and, actually, to take refuge amongst those hunter-forager populations. That is the hypothesis that I’m putting forward. I’m not claiming that it’s a fact. But, for me, it helps to explain the evidence.
Lex Fridman
(01:20:24)
That speaks to one of the challenges that archeologists pose to this idea, which is that there is a lot of evidence of humans in the ice age, and they appear to be all hunter-gatherers. But, like you said, only a small percent of the areas where humans have lived have been studied by archeologists.
Graham Hancock
(01:20:46)
That’s right. A very tiny percent. And only a tiny percent of every archeological site has been studied by archeologists, too. Typically, one to 5% of any archeological site is excavated.
Lex Fridman
(01:20:55)
I mean, that’s why Göbekli Tepe fills my mind with imagination, especially seeing it as a time capsule. It’s almost certain that there are places on earth we haven’t discovered that, once we do, even if they’re from after the ice age, will change our view of human history. What would be your dream thing to discover, like Göbekli Tepe, that serves as a definitive perturbation to our understanding of ice-age history?
Graham Hancock
(01:21:29)
Some archive. Some hall of records. There are both mystical associations with the Hall of Records at Giza, from people like the Edgar Cayce organization, and also ancient Egyptian traditions which suggest that something was concealed beneath the Sphinx. This is not an idea that is alien to Ancient Egypt. It’s quite present in Ancient Egypt. So far, as far as I know, nobody has dug down beneath the Sphinx. Of course, there are very good reasons for that. You don’t want to damage the place too much. But let’s call it the Hall of Records. I’d love to find that.

(01:22:09)
But I think in a way that’s what Göbekli Tepe is. Göbekli Tepe is a hall of records. It’s interesting that, just as the three great pyramids of Giza match Orion’s Belt in 10,500 BC and the Sphinx matches Leo in 10,500 BC, 12,500 years ago or so, as I’ve tried to outline, I hope reasonably clearly, Pillar 43 in Enclosure D at Göbekli Tepe contains what a number of researchers, myself included, regard as an astronomical diagram. Martin Sweatman of Edinburgh University has brought forward the best work in this field. But it was initially started by a gentleman called Paul Burley, who noticed that one of the figures on Pillar 43 is a scorpion, very much like we represent the constellation of Scorpio today, and that above it is a vulture with outstretched wings, in a posture very similar to the constellation that we call Sagittarius. On that outstretched wing is a circular object, and the suggestion is that it’s marking the time when the sun was at the center of the dark rift in the Milky Way at the summer solstice, 12 and a half thousand years ago. That’s what it’s marking.

(01:23:28)
It’s interesting that the same date can be deduced from Pillar… Of course, it’s controversial. Martin Sweatman’s ideas are by no means accepted by archeology. But he’s done very, very thorough, detailed statistical work on this, and I’m personally convinced. We have a time capsule at Göbekli Tepe, which is memorializing a date that is at least 1,200 years before Göbekli Tepe was built, if that dating of 11,600 years ago proves to be absolutely the oldest date, as it is at present. The date memorialized on Pillar 43 is 12,800 years ago, the beginning of the Younger Dryas, the beginning of the impact event.

(01:24:09)
Then, Giza does the same thing but on a much larger scale. It uses massive megalithic architecture, which is very difficult to destroy, and a profound knowledge of astronomy to encode a date in a language that any culture which is sufficiently literate in astronomy will be able to decode. We don’t have to have a script that we can’t read, like we do with the Indus Valley civilization or with the Easter Island script. We don’t have to have a script that can’t be interpreted. If you use astronomical language, then any astronomically literate civilization will be able to give you a date.

(01:24:48)
Hoover Dam has a star map built into it. That star map is part of an exhibition that was put there at the founding of the Hoover Dam. What it does is freeze the sky above the Hoover Dam at the moment of its completion. Oskar Hansen, the artist who created that piece, said specifically that this was so that any future culture would be able to know the time of the dam’s construction. You can use astronomy and architecture to memorialize a particular date.
Lex Fridman
(01:25:22)
Quick pause. Bathroom break.
Graham Hancock
(01:25:24)
Sounds good.

Response to critics

Lex Fridman
(01:25:25)
To me, the story that we’ve been talking about is exciting whether the mainstream archeology narrative is correct or the one you’re constructing is correct. Both are super interesting, because the mainstream archeology perspective means that there’s something about the human mind from which the pyramids, these ideas, spring naturally. You place humans anywhere, you place them on Mars, it’s going to come out that way. That’s an interesting story of human psychology that then becomes even more interesting when you evolve out of Africa with Homo sapiens, how they think about the world. That’s super interesting. Then, if there’s an ancient, advanced civilization, that explains why there are so many similar types of ideas that spread, and that means that there’s so much still undiscovered about the spring of these ideas of civilization. To me, they’re both fascinating. I don’t know why there’s so much infighting.
Graham Hancock
(01:26:29)
I think it’s partly territorial. I cannot speak for all archeologists, but some archeologists feel very territorial about their profession. They do not feel happy about outsiders entering their realm, especially if those outsiders have a large platform. I’ve found that the attacks on me by archeologists have increased step-by-step with the increase of my exposure. I wasn’t very interesting to them when I just had one minor bestseller in 1992 with a book called The Sign and the Seal. But when Fingerprints of the Gods was published in 1995 and became a global bestseller, then I started to attract their attention and, it appears, to be regarded as a threat to them.

(01:27:26)
That is the case today. That is why Ancient Apocalypse Season 1 was defined as the most dangerous show on Netflix. It’s why the Society for American Archeology wrote an open letter to Netflix asking Netflix to reclassify the series as science fiction. It’s why they accused the series of antisemitism, misogyny, white supremacism, and… I don’t know, a whole bunch of other things like that, that have nothing to do with anything that’s in the series. It was like, “We must shut this down. This is so dangerous to us.” There are many more dangerous things in the world right now than a television series. But maybe it was seen as a danger to archeology, that this non-archeologist was in archeological terrain and being viewed and seen and read by large numbers of people. Maybe that was part of the problem.

(01:28:28)
Human nature being what it is, I noticed that two of my principal critics, John Hoopes from the University of Kansas and Flint Dibble, who’s now teaching at Cardiff University in Wales in the UK, are both people who like to have media exposure. John Hoopes has just recently started a YouTube channel. Flint Dibble has had one for quite a while, with a pretty small number of followers. I think that they feel that they should be the ones who are getting the global attention, that it’s not right that I am, and that the best way to stop that is to stop me, to shut me down, to get me canceled, basically by requiring Netflix to relabel my series from a documentary to science fiction, which is what they actually had the temerity to suggest to Netflix.

(01:29:24)
If that had gone through, if Netflix had listened to them, that would’ve effectively been the cancellation of my documentary series. It would no longer have been ranked under documentaries. It was a deliberate attempt to shut me down. I see that going on again and again, and it’s so unfortunate and so unnecessary. I’ve become very defensive towards archeology. I hit back. After 30 years of these attacks on my work, I’m tired of it. I do defend myself. Sometimes, I’m perhaps over-vigorous in that defense. Maybe I was a little bit too strong in my critique of archeology in the first season of Ancient Apocalypse. Maybe I should have been a bit gentler and a bit kinder. I’ve tried to reflect that in the second season and to bring also many more Indigenous voices into the second season, as well as the voices of many more archeologists.
Lex Fridman
(01:30:16)
Yeah. In general, I got a chance to get a glimpse of the archeology community. In archeology, and in science in general, I don’t have much patience for this arrogance or snark or dismissal of the general human curiosity that I think your work inspires in people. That’s why people like Ed Barnhart, who I recently had a conversation with… He radiates kindness and curiosity as well. That kind of approach to ideas, especially about human history, inspires people.
Graham Hancock
(01:30:54)
Exactly.
Lex Fridman
(01:30:55)
Inspires millions of people to ask questions.
Graham Hancock
(01:30:57)
Exactly. Exactly.
Lex Fridman
(01:30:57)
I mean, that’s why you had Keanu Reeves on the new season. He’s basically coming to the show from that same perspective of curiosity.
Graham Hancock
(01:31:05)
Keanu is genuinely curious about the past and very, very interested in it. He’s bringing to it questions that everybody brings to the past. He’s speaking for the everyman in the series.
Lex Fridman
(01:31:17)
Given that, can you maybe steelman the case that archeologists make about this period that we’ve been talking about? Can you make the case that that is indeed what happened: it was hunter-gatherers for a long time, and then there was a cataclysm, a very difficult period in human history with the Younger Dryas, and that changed the environment and then led to the springing up of civilizations at different places on earth? Can you make the case for that?
Graham Hancock
(01:31:50)
No, I completely understand why that is the position of archeology, because that’s what they’ve found. Archeology very much wishes to define itself as a science. The techniques of weighing and measuring and counting are very key to what archeology does. In what they’ve found and what they’ve studied around the world, they don’t see any traces of a lost civilization. We live in a very politically correct world today. The idea that some lost civilization brought knowledge to other cultures around the world is seen as almost racist or colonialist in some way. It triggers that aspect as well.

(01:32:39)
But basically, I think the majority of archeologists are in complete good faith on this. I don’t think that anybody’s really seeking to frame me. What we are hearing from most archeologists, some much more vicious than others, is: this is what we found, and we don’t see evidence for a lost civilization in it.
Graham Hancock
(01:33:00)
To that, I must reply, “Please look at the myths. Please consider the implications of the Younger Dryas. Please look at the ancient astronomy. Please look at those ancient maps and don’t just dismiss them and sneer at them. And for God’s sake, please look more deeply at the parts of the world that were immensely habitable and attractive during the ice age and that have hardly been studied by archaeology at all, before you tell us that your theory is the only one that can possibly be correct.” In fact, it’s a very arrogant and silly position for archeology, because archaeological theories are always being overthrown. It can take years, it can take decades. It took decades in the case of the Clovis-First hypothesis for the settlement of the Americas. But sooner or later a bad idea will be kicked out by a preponderance of evidence that that idea does not explain.
Lex Fridman
(01:33:57)
If we can just look back at your debate with Flint Dibble on Joe Rogan Experience, what are some takeaways from that? What have you learned? Maybe what are some things you like about Flint? You said that he’s one of your big critics, but what do you like about his ideas? And what were you maybe bothered by?
Graham Hancock
(01:34:17)
First of all, just very recently, and it can be found on my YouTube channel and signaled on my website, I have made a video, which runs about an hour and looks at a series of statements that Flint made during the debate, which I was not prepared to answer. And it turns out that some of those statements are not correct. The notion, for example, that there were three million shipwrecks that have been mapped. Flint actually uses the word “mapped” at one point in the debate: three million shipwrecks that have been mapped. And I’ve put that clip into the video that I brought out. That is not a fact, that is an estimate, a UNESCO estimate. And actually, in the small print on one of the slides that he has on the screen, you can see the word “estimate,” but he never says that word out loud. So those who are listening to the podcast rather than watching it wouldn’t even have a chance to see that. And I, sitting there in the studio, didn’t see that word “estimate” either.

(01:35:19)
And I didn’t know that. I thought, “My God, Flint has a point here. If there’d been three million shipwrecks found and mapped, then the absence of any shipwreck from a lost civilization of the ice age is a problem.” But then I discovered that it isn’t three million shipwrecks that have been mapped. It’s much, much less than that. Maybe it’s 250,000. Still a large number, but most of them from the last 1,000 years. And unfortunately, what Flint didn’t go into, and perhaps should have shared with the audience, and again I go into this in the video, is that there is indisputable evidence that human beings were seafarers as much as 50 or 60,000 years ago. The peopling of Australia involved a relatively short 90- to 100-kilometer ocean voyage. But nevertheless, it was an ocean voyage.

(01:36:09)
And it must have involved a large enough number of people to create a permanent population that wouldn’t go extinct. The settlement of Cyprus is the same thing. It was always an island, even during the ice age. And no ships have survived that speak to the settlement of Australia, and no ships have survived that speak to the settlement of Cyprus either. But that doesn’t mean those things didn’t happen.
Lex Fridman
(01:36:32)
I [inaudible 01:36:33] linger on this, because for me, the shipwrecks thing was convincing. And then looking back, first of all, watching your video, but also just realizing the peopling-of-Australia part, that’s mind-boggling. 50,000 years ago. Just imagine being the person standing on the shore of a harsh environment, looking out into the ocean, and deciding, “You know what? I’m going to go towards near-certain death and explore-
Graham Hancock
(01:37:03)
You don’t know what’s on the other side of that water. You can’t see 90 kilometers-
Lex Fridman
(01:37:06)
And humans did it.
Graham Hancock
(01:37:07)
Yeah.
Lex Fridman
(01:37:09)
I love humans so much.
Graham Hancock
(01:37:09)
Again, it’s that urge to explore. And I suggest that it probably began with a few pioneers who made the journey there and back. They ventured into the water. They definitely had boats. And lo and behold, after a two- or three-day voyage, they ended up on a coastline. You’re an individual. You’ve gotten as far as Timor by relatively straightforward island-hopping, where each island is within sight of the next. And when you get to Timor, suddenly you can’t island-hop anymore. There’s an expanse of ocean that you can’t see across. But that urge to explore, that curiosity that is central to the human condition, would undoubtedly have led some adventurous individuals to want to find out more and even be willing to risk their lives. And that first reconnoitering of what lay beyond that strait would’ve undoubtedly been undertaken by very few individuals. Not enough to create a permanent population in Australia, but then they came back with the good news that there’s a whole land there. That’s the land that geographers call Sahul, just as Sunda was the ice-age landmass where Indonesia and the Malaysian Peninsula were all joined together.

(01:38:25)
So Sahul was New Guinea joined to Australia. They would’ve made landfall in New Guinea. And then they think, “Well, here is this vast, open, incredible land. We need to bring more people here.” And that would’ve involved larger craft. You need to bring people with resources, and you need to bring enough of them, both men and women, in order to produce a population that will not rapidly become extinct. And it’s the same in Cyprus. There, the work that’s been done suggests very strongly that we’re looking at planned migrations of groups of people in excess of 1,000 at a time, bringing animals with them. And this certainly would’ve involved multiple boats, and boats of a significant size.
Lex Fridman
(01:39:15)
And there’s no archaeological evidence of those boats?
Graham Hancock
(01:39:18)
None whatsoever. The oldest boat that’s ever been found in the world is the Dokos shipwreck off Greece, which is around 5,000 years old, if I recall correctly.
Lex Fridman
(01:39:26)
So everything that makes up a boat is lost over time?
Graham Hancock
(01:39:30)
Yes. Boats can be preserved under certain circumstances. There’s a wreck at the bottom of the Black Sea, almost two miles deep. I didn’t know the Black Sea was that deep. But there’s no oxygen down there, and that wreck, which is more than 2,000 years old, is still in pretty much perfect condition. But in other conditions, the structure of the ship evaporates. Sometimes what you’re left with is the cargo of the ship. And you could say there was a ship that sank here, but the ship itself has gone. The fact is, we know that our ancestors were seafarers as much as 50,000 years ago. And no ship has survived to testify to that, yet we accept that they were.
Lex Fridman
(01:40:10)
Do you think one day we’ll find a ship that’s 10, 20, 30, 40, 50,000 years old?
Graham Hancock
(01:40:17)
It’s not impossible. I think it’s quite unlikely, given the very thin survival of ships the further back you go in time, with the oldest, as I say, being about 5,000 years old now. And then the other thing to take into account is the Younger Dryas event itself and the cataclysmic circumstances of that event. With the roiling of the seas that would’ve taken place then, whether a boat wrecked at that time would’ve survived for thousands of years afterwards, I’m not sure. But I don’t give up hope. It’s possible.
Lex Fridman
(01:40:55)
Okay. So that’s back to the three million shipwrecks.
Graham Hancock
(01:40:59)
Yeah.
Lex Fridman
(01:40:59)
So what’s your takeaway from that debate?
Graham Hancock
(01:41:01)
Well, my takeaway from that debate is that I should have been better prepared and I should have been less angry. I have to say that Flint had really disturbed me with these constant snide, not-quite-exact references to racism and white supremacism in my work. I detest such things, and to have those labels stuck on me… He’s always avoided taking direct responsibility, pretty much always. There’s one example, which I include in the video I’ve made, where he really hasn’t successfully avoided it. But in most cases he’s trying to say that I rely on sources that were racist, but that he’s not saying that I myself am a racist.

(01:41:48)
But the end result of those statements is that people all around the world came to the conclusion that Graham Hancock is a racist and a white supremacist. And that really got under my skin and it really upset me. I felt angry about it, and I felt that I was there to defend Ancient Apocalypse, season one, whereas in fact, what I was there to do was to listen to a series of lectures where an archaeologist tells me what archaeologists have found. And somehow I’m to deduce from what they have found that they’re not going to find anything else, at least not anything to do with a lost civilization.
Lex Fridman
(01:42:23)
Listen, I feel you. I’ve seen the intensity of the attacks, and the racism label is the one that can get under your skin. It’s a tool that’s been prevalent over the past, let’s say, decade, maybe a little bit more, as a method of cancellation. When a person is the opposite of racist, very often it’s hilarious to watch. But it can get under your skin, especially with certain dynamics that happen on the internet, where it seeps into a Wikipedia page, and then other people read that Wikipedia page, and you get to hear it from friends: “Oh, I didn’t know you’re…” whatever. And you realize that the Wikipedia description of who you are actually has a lot of power, not with people who know you well, but with people who are just learning about you for the first time-
Graham Hancock
(01:43:12)
Definitely.
Lex Fridman
(01:43:12)
And it can really start to annoy you and get under your skin, when people are indirectly injecting it… They’re writing articles about you, which can then be cited by Wikipedia. It can really bother a person who’s actually trying to do good science, or just trying to inspire people with different ideas.
Graham Hancock
(01:43:30)
I felt that my work was being deliberately misrepresented and I felt that I, as a human being, was being insulted and wronged in ways that are deeply hurtful. My wife and I have six children between us and we have nine grandchildren. And of those nine grandchildren, seven are of mixed race. And this is my family, and these are kids who are going to grow up and read Wikipedia and learn from reading Wikipedia that Grandpa was some kind of racist. This is a personal issue for me, and I’m afraid I carried that personal anger into the debate and it made me less effective than I should have been. But ultimately I do want to pay tribute to Flint. He is an excellent debater. He’s got a very sharp mind. He’s a very clever man and he’s very fast on his feet. And I recognize that.

(01:44:22)
I was definitely up against a superior debater in that debate. I’m not sure that I have those debating skills and I certainly didn’t have them on that particular day. I also admire about Flint something else, which is that he was willing to be there. Most archaeologists don’t want to talk to me at all. They want to insult me from the sidelines. They want to make sure that Wikipedia keeps on calling me a pseudo-archaeologist, or a purveyor of pseudo-archaeological theories. They want to make sure that the hints of racism are there, but they actually don’t want to sit down and confront me.

(01:44:54)
At least Flint was willing to do that and I’m grateful to him for that. And I think in that sense it is an important encounter between people with, let’s say, an alternative view of history and those with the very much mainstream view of history that archaeology gives us. And he’s also a very determined character. He doesn’t give up. So all of those things about him I admire and respect. But, I think he fought dirty during the debate, and I’ve said exactly why in this video that I now have up on YouTube.
Lex Fridman
(01:45:26)
To say a positive thing that I enjoyed: towards the end, him speaking about agriculture was pretty interesting. The techniques of archaeology are pretty interesting, where you can get some insights through the fog of time about what people were doing, how they were living. That’s pretty interesting.
Graham Hancock
(01:45:47)
It’s very interesting. It’s a very important discipline. And I’ve said many times before, publicly, I couldn’t do any of my work without the work that archaeologists do. I emphasize very strongly in this video that I don’t study what archaeologists study. But nevertheless, the data that archaeologists have generated over the last century or so has been incredibly valuable to me in the work that I do. But when I look at the Great Sphinx and the studies of archaeology saying that this is the work of the pharaoh Khafre, despite the absence of any single contemporary inscription that ascribes it to Khafre, and in fact the presence of other inscriptions that say it was already there in the time of Khufu, I am not looking at what Egyptologists study. They just dismiss all of that and lock into the Khafre connection.

(01:46:35)
At Gobekli Tepe, I’m not really looking at what archaeologists look at, I’m looking at the alignments of the megaliths and how they seem to track precession of the star Sirius over a period of time. Archaeologists aren’t interested in any of that. So I value and respect archeology. I think it’s an incredible tool for investigating our past, but I wish archaeologists would bring a slightly gentler frame of mind to it and a slightly more open perspective. And also that archaeologists would be willing to trust the general public to make up their own minds. It’s as though certain archaeologists are afraid of the public being presented with an alternative point of view, which they regard as quote, unquote, “dangerous,” because they somehow underestimate the intelligence of the general public and think the general public are just going to accept that.

(01:47:24)
Actually, by condemning those alternative points of view, archaeologists make it much more likely that the general public will accept those alternative points of view, because there is a great distrust of experts in our society today. And behaving in a snobbish, arrogant way, saying we archaeologists are the only people who are really qualified to speak about the past and anybody else who speaks about the past is dangerous, that actually is not helpful to archaeology in the long term. There could be a much more positive and a much more cooperative relationship. And I can see that relationship with a gentleman like Ed Barnhart. That was very much the case with archaeologist Martti Parssinen from the University of Helsinki and with the Brazilian geographer Alceu Ranzi, a very, very senior figure who I worked with in the Amazon for season two of Ancient Apocalypse, looking at these astonishing earthworks that have emerged from the Amazon jungle and which more and more are now being found with LiDAR. Indeed, we found some of them ourselves with LiDAR while we were there.
Lex Fridman
(01:48:26)
Yeah. That was an incredible part of the show that I got a chance to preview. It’s like there are all these earthworks. Yeah. The traces of things built on the ground that probably you can only really appreciate when you look from up above.
Graham Hancock
(01:48:44)
That’s right.
Lex Fridman
(01:48:44)
So the idea that they built stuff that you can only appreciate when viewed from up above means they had a very deep relationship with the sky.
Graham Hancock
(01:48:55)
With the sky. And a very good knowledge of geometry as well, because these are geometrical structures and some of them even seem to incorporate geometrical games, almost like squaring the circle. It’s not quite that, but you have a lovely square earthwork with a lovely circle earthwork right in the middle of it. Whatever else they were, they were geometers. They were not just builders of fantastically huge earthworks that nobody expected in the Amazon. Not just builders of cities that we now know existed in the Amazon. But, that they were astronomers and mathematicians as well.

Panspermia

Lex Fridman
(01:49:32)
Everything we’re talking about is so full of mystery. It’s just fascinating, especially the farther back we go.
Graham Hancock
(01:49:36)
That’s what I love about the past, is the mystery that’s there. And that’s another thing that I regret about some archeologists is that their mission seems to drain all mystery out of the past, to suck it dry like some vampire sucking the blood out of the past and to reduce it to a series of numbers that appear to be scientific. I think that’s most unfortunate. The past is deeply mysterious. The whole story of life on earth is deeply mysterious. We were talking about the timeline of human beings, but if you go back to the formation of the earth itself, if I’ve got the figures right, it’s about four-and-a-half billion years ago that the Earth supposedly formed. It was then incredibly hot and inhospitable to life for the next several hundred million years.

(01:50:29)
But it was actually Francis Crick who pointed out something odd, that within 100 million years of the earth being cool enough to support life, there’s bacterial life all over the planet. And Crick wrote a book called Life Itself that was published in 1981, and he suggested that life had been brought here by a process of panspermia. Now that’s an idea that’s around in circulation that comets may carry bacteria, which can seed life on planets. But, Crick actually in Life Itself was talking about directed panspermia. He envisaged … This is Crick, not me. He envisaged an alien civilization far away across the galaxy, which faced extinction. Perhaps a supernova was going to go off in the neighborhood.

(01:51:22)
They were highly advanced. Their first thought might’ve been, “Let’s get ourselves off the planet and go and populate some other planet,” but the distances of interstellar space were so great. So their second thought was, “Let’s preserve our DNA. Let’s put genetically engineered bacteria into cryogenic chambers and fire them off into the universe in all directions.” And the bottom line of Crick’s theory in Life Itself is that one of those cryogenic containers containing bacterial life from another solar system crashed into the early Earth. And that’s why life began so suddenly here on Earth.
Lex Fridman
(01:51:58)
If we as a human civilization continue, I think that is one way to create backups of us elsewhere in the universe, given the vastness of space: build a life gun, shoot it everywhere, and it just plants. And you hope that whatever is the magic that makes up human consciousness … And if that magic was already there in the initial DNA of the bacteria-
Graham Hancock
(01:52:27)
The potential for that magic is there.
Lex Fridman
(01:52:29)
The potential is there.
Graham Hancock
(01:52:30)
And evolutionary forces will work upon it in different ways in different environments. But the potential is there. Yes. It’s something that we would do. If we were facing a complete extinction of life on planet Earth, a major global effort would be made to preserve it somehow. And that might well include firing off cryogenic chambers into the universe and hoping that some of them would land somewhere hospitable.
Lex Fridman
(01:52:56)
And as you were mentioning, there’s just so many interesting mysteries along the way here. For example, I think for like three billion years it was single-cell organisms. So it seems like life was pretty good for single-cell organisms, that there was no need for multicellularity, for animals, for any of this kind of stuff. So why is that? It seems like you could adapt much better if you are a more complicated organism. It took a really long time to take that leap. Is it because it’s really hard to do? And what was the forcing function to do that kind of leap?

(01:53:33)
And the same for us, selfish and self-obsessed as we are: for us humans, what was the magic leap to Homo sapiens from the other hominids? And why did Homo sapiens win out against the Neanderthals and the other competitors? Why are they not around anymore? So those are all fascinating mysteries, and it feels like the more we propose radical ideas about our past, take them seriously, and explore, the more we’ll be able to figure out that puzzle that leads all the way back to Homo sapiens and maybe all the way back to the origin of life on Earth.
Graham Hancock
(01:54:13)
Yeah. Yeah. I think that Homo sapiens is the tail end of a very long, deep series of mysteries that goes back right to the beginning of life on this planet. And probably long before actually, because this planet is part of the universe. And God knows what else is out there in the universe.
Lex Fridman
(01:54:31)
Why do you think Homo sapiens evolved? What was the magic thing? There’s a bunch of theories about fire leading to meat, to cooking, which can fuel the brain. That’s one. The other is social interaction. We’re able to use our imagination to construct ideas and share those ideas and tell great stories and that is somehow an evolutionary advantage. Do you have any favorite conceptions of-
Graham Hancock
(01:54:57)
Well, it’s interesting. There’s no doubt that anatomically modern humans and Neanderthals coexisted in Europe for at least 10,000 years, probably more than that. And yet one of the popular views is that anatomically modern humans wiped out the Neanderthals, that we killed them off. But, at the same time we were interbreeding with the Neanderthals. In a sense, the Neanderthals are not gone. They’re still within us today. We are part Neanderthal. There’s another theory that I’ve read about. There is some evidence that Neanderthals were cannibals, that ritual cannibalism took place amongst Neanderthals, and particularly the eating of human brains. And this can cause kuru, which can kill off whole populations. That’s another suggestion of why the Neanderthals died out.

(01:55:50)
There’s lots of possibilities that have been put forward. Maybe we just out-competed them. Maybe anatomically modern humans had some brain connections that they didn’t have. Even though the Neanderthal brain was bigger than the brain of anatomically modern human beings, as the old saying goes, size isn’t everything. Maybe we just had a more compact, more efficient brain. The fact of the matter is that Neanderthals and Denisovans did not survive the rise of Homo sapiens.
Lex Fridman
(01:56:21)
For our discussion, though, what is interesting is all the hominids seem to be explorers.
Graham Hancock
(01:56:26)
Yes.
Lex Fridman
(01:56:26)
They spread. I didn’t know this.
Graham Hancock
(01:56:28)
The fact that Homo erectus was all over the planet more than a million years ago is testament to that. And I do think that exploration urge is fundamental to humanity. And I would like to say that’s what I think I’m doing. I’m exercising my urge to explore the past in my own way, making my own path and defining my own route.

Shamanism

Lex Fridman
(01:56:53)
That’s the leap from non-human to human. One of the things you’ve discussed is your idea of what was the leap to human civilization. What is the driver? What is the inspiration for humans to form civilizations? And for you, that’s shamanism.
Graham Hancock
(01:57:12)
Definitely.
Lex Fridman
(01:57:12)
Can you explain what that means?
Graham Hancock
(01:57:14)
I think that shamanism is the origin of everything of value in humanity. I think it was the earliest form of science. When I spend time with shamans in the Amazon, I observe people who are constantly experimenting with plants in a very scientific way. They’re always trying a pinch of this and a pinch of that in different forms, for example, of the ayahuasca brew, to see if it enhances it or makes it different in any way. The invention of curare is a remarkable scientific feat, which is entirely down to shamans in the Amazon. They are the scientists of the hunter-forager state of society and they were the ancient leaders of human civilization.

(01:58:09)
So I think all civilization arises out of shamanism. And shamanism is a naturally scientific endeavor, where experimentation is undertaken, an exploration and investigation of the environment around us. And what I’m suggesting is that one group, perhaps more than one group, went a bit further than other groups did, used that study of the skies, developed navigational techniques, and were able to sail and explore the Earth. But ultimately what lies behind it is the same curiosity and investigative skill that shamans are still using in the Amazon to this day. And I do see them as scientists in a very proper use of the word.
Lex Fridman
(01:58:56)
But do you think something like ayahuasca was a part of that process?
Graham Hancock
(01:59:02)
Yes. Ayahuasca is the result of shamanistic investigation of what’s available in the Amazon. Of course, ayahuasca is all the fad in Western industrialized societies today. And some people see it as a miracle cure for all kinds of ailments and problems. And perhaps it is, perhaps it can be in certain ways. The word ayahuasca itself is not an Amazonian word. It comes from the Quechua language and it means the vine of souls or the vine of the dead. But the ayahuasca vine is only one of two principal ingredients in the ayahuasca brew. And the other ingredient is leaves that contain dimethyltryptamine. And there are two sources of that. One is a bush called Psychotria viridis, that’s its botanical name. They call it Chacruna in the Amazon. And its leaves are rich in dimethyltryptamine, DMT, which is arguably the most powerful psychedelic known to science. And the other source comes from another vine, Diplopterys cabrerana, whose leaves also contain DMT. So the ayahuasca vine on its own is not going to give you a visionary journey. And the leaves that contain DMT on their own, whether they come from Diplopterys or whether they come from Chacruna, are not going to give you a visionary journey. And the reason they’re not going to give you the visionary journey is the enzyme monoamine oxidase in the gut, which shuts down DMT when absorbed orally. Basically, DMT is not accessible orally unless you combine it with a monoamine oxidase inhibitor. And that’s what I mean when I’m talking about science in the Amazon, because there are so many tens of thousands, hundreds of thousands of different species of plants and trees in the Amazon. And they’ve gone around and they’ve found just two or three of them that put together can produce these extraordinary visionary experiences.
Lex Fridman
(02:00:59)
Just imagine the number of plants they had to have eaten, consumed, and smoked, in all kinds of combinations, to arrive at that.
Graham Hancock
(02:01:07)
Exactly. Exactly. To realize that this is something very special. And then to use the principles there to find another form of it. So ayahuasca is the form that is made with the ayahuasca vine and the leaves of the Chacruna plant. But yage is made from the ayahuasca vine and the leaves of another vine, Diplopterys cabrerana, which contain not only DMT, which is what everybody’s pretty much familiar with these days, but also 5-MeO-DMT. And the yage experience, which I have also had, in my view is more intense and more powerful, almost to the point of being overwhelming, than the ayahuasca experience. But the result of this sophisticated chemistry that we find taking place here is a brew which is hideous to drink. The taste, I find it quite repulsive. I almost retched just smelling it in the cup.

(02:02:15)
But then it unleashes these extraordinary experiences. And it isn’t just pretty visuals. It’s the sense of encounters with sentient others, that there are sentient beings, that somehow we are surrounded by a realm of sentience that is not normally accessible to us. And that is what the ayahuasca brew and certain other psychedelics can do; some psilocybin mushrooms in a high enough dose can do it as well. LSD can do it. But ayahuasca is the master in this of lowering the veil to what appears to be a seamlessly convincing other realm, other world. And of course the hard-line, rational scientists will say that’s all just fantasies of your brain. But I don’t think we fully understand,

(02:03:03)
Or even close to understanding exactly what consciousness is. And I remain open to two possibilities: that consciousness is generated by the brain, is made by the brain in the way that a factory makes cars. But I also am open to the possibility that the brain is a receiver of consciousness, just as a television set is the receiver of television signals. And if that is the case, then while we’re locked into the physical realm, we need our everyday, alert, problem-solving state of consciousness, and that’s the state of consciousness that Western civilization values and highly encourages. But these other states of consciousness that allow us to access alternative realities are possibly more important. It may be apocryphal, but it was reported after Francis Crick-
Graham Hancock
(02:04:00)
But it was reported, after Francis Crick won his Nobel Prize for the discovery of the double helix, that he finally got it under the influence of LSD. There’s the classic example of Kary Mullis and the polymerase chain reaction. He said he got that under the influence of LSD. So the notion that the alert, problem-solving state of consciousness is the only valuable state of consciousness is disproved by valuable experiences that people have had in a visionary state. But the question that remains unresolved is those entities that we encounter, and not everybody encounters them, and you’re certainly not going to encounter them on every ayahuasca trip. There are ayahuasca journeys where nothing seems to happen. I suspect something does happen, but it happens at a subconscious level. I know that shamans in the Amazon regard those trips where you don’t actually see visions as amongst the most valuable, and they say you are learning stuff that you’re not remembering, but you’re learning it anyway.

(02:05:02)
These sentient others that are encountered, what are they? Are they just figments of our brain on drugs or are we actually gaining access to a parallel reality, which is inhabited by consciousness which is in a non-physical form? And I’m equally open to that idea. I think that may be what is going on here with ayahuasca.

(02:05:25)
But the other thing is that there is a presence within the ayahuasca brew, and she is present both in ayahuasca and in yage. And that’s one of the reasons why the shamans say that actually the master of the process is the ayahuasca vine, not the leaves. It’s as though the vine has harnessed the leaves to gain access to human consciousness. And if you have sufficient exposure to ayahuasca or yage, you drink it enough times, I’ve had maybe 75 or 80 journeys with ayahuasca, you definitely start to feel an intelligent presence with a definite personality, which I interpret as feminine, and which most people in the West interpret as feminine and call her Mother Ayahuasca. There are some tribes in the Amazon who interpret the spirit of ayahuasca as male, but in all cases, that spirit is seen as a teacher. That’s fundamentally what ayahuasca is. It’s a teacher. And it teaches moral lessons.

(02:06:28)
And that’s fascinating, that a mixture of two plants should cause us to reflect on our own behavior and how it may have hurt and damaged and affected others and fill us with a powerful wish not to repeat that negative behavior again in the future. The more baggage you carry in your life, the harder the beating the ayahuasca is going to give you, until it forces you to confront and take responsibility for your own behavior. And that is an extraordinary thing to come from a plant brew in that way.

(02:07:02)
And I think, yes, ayahuasca is the most powerful of all the plant medicines for accessing these mysterious realms. But there’s no doubt you can access them in other ways. They’re all tryptamines. They’re all related to one another in one way or another. You can access them through LSD and you certainly can access them through psilocybin mushrooms as well, in a large enough dose.
Lex Fridman
(02:07:24)
Both possibilities, as you describe, are interesting. And to me, they’re kind of akin to each other. I wonder what the limit of the brain’s capacity is to create imaginary worlds and treat them seriously and make them real, and in those worlds, explore and have real moral, deep brainstorming sessions with those entities. So it’s almost like the power of the human mind to imagine taken to its limit.
Graham Hancock
(02:08:01)
It is. And the curious thing is that the same iconography… People paint their visions after ayahuasca sessions. People were painting in Europe in the cave of Lascaux, for example, and of course they had access to psilocybin mushrooms in prehistoric Europe. There’s a remarkable commonality in the imagery that is painted.

(02:08:26)
I like to give credit where credit is due, and there are two names that need to be mentioned here. One is the late, great Terence McKenna and his book Food of the Gods, where he proposed the idea very strongly that it was our ancestral encounters with psychedelics that made us fully human. That’s what switched on the modern human mind.

(02:08:47)
And very much the same idea began to be explored a bit earlier by Professor David Lewis-Williams at the University of the Witwatersrand in South Africa, in a fabulous book called The Mind in the Cave, where he is again arguing that these astonishing similarities in cave art and rock art all around the world can only be properly explained by people in deeply altered states of consciousness attempting to remember, when they return to a normal everyday state of consciousness, attempting to remember their visions and document them on permanent media like the wall of a cave.

(02:09:22)
So, typically you get a lot of geometric patterns, but you also get entities. And those entities are often therianthropes, part animal, part human in form. One might have the head of a wolf and the body of a human being, another might have the head of a bird and the body of a human being, and so on and so forth. And they communicate with us in the visionary state.

(02:09:45)
Interestingly, although this sounds like woo-woo, and it is an area that most scientists would steer clear of at risk of their careers, there is very serious work now being done at Imperial College in London and at the University of California at San Diego, where volunteers are being given extended DMT. There’s a new technology, DMTx, where the DMT is fed directly into the bloodstream by drip, and it’s possible to keep the individual in the peak DMT state. Normally, when you smoke or vape DMT, you’re looking at about 10 minutes, if you’re lucky, or if you’re unlucky, if it’s a bad journey, those 10 minutes can seem like forever. But with DMTx, with the drip-feeding of DMT into the bloodstream, these volunteers can actually be kept in the peak state for hours.

(02:10:40)
And unlike LSD where you rapidly build up tolerance, nobody ever builds up tolerance to DMT. It always hits you with the same power. Even if you took it yesterday and the day before and you’re taking it tomorrow as well, it’s still going to have that same power. There’s no tolerance there. So that’s how they can use that lack of tolerance to keep volunteers in this state.

(02:10:59)
And then when they debrief those volunteers… They’re also putting them in MRI scanners and looking at what’s happening in the brain. But when they debrief them, they’re all talking about encounters with sentient others. There’s even a group now called Sentient Others, where volunteers are now exchanging their experiences. They weren’t allowed to do so at the beginning of the experiment, but now that most of them have left it, they’re exchanging their experiences, and it’s all about encounters with sentient others who wish to teach them moral lessons.

(02:11:28)
Now, to me, that’s wild. What is going on here? How do we account for this? Yeah, I get the notion of hallucinations and brightly colored visuals, but the moral lessons that come with it, those are very odd.
Lex Fridman
(02:11:43)
Yeah. And would you say that the reason that could give birth to a civilization is that such visions can help create myths, especially religious myths, that would be a cohesive thing for a large group of people to gather around?
Graham Hancock
(02:12:02)
Yes. And can help us to be better members of our own community.
Lex Fridman
(02:12:05)
Right, with moral lessons.
Graham Hancock
(02:12:06)
Yeah. More contributing members of our community. More caring, more nurturing members of our community. That’s got to be good for any community. I’ve said this a dozen times, but I’ll say it again. If I had the power to do so, I would make it a law, an absolute law, that anybody running for a powerful political position, particularly if that position is president or head of state in any kind of way, that that person has to undergo the ayahuasca ordeal first. They have to have 10 or 12 sessions of ayahuasca as a condition for applying for the job. I suspect that most who had had those experiences wouldn’t want to apply for the job anymore. They would want to live a different kind of life. And those who did want to carry on being a leader of a nation would be very different people from the people who are leading the nations of the earth into chaos and destruction today.
Lex Fridman
(02:13:07)
Yeah, they would be doing it for the right reasons. I mentioned to you, I recently interviewed Donald Trump, and I actually brought up this same idea that it would be a much better world if most of Congress and most politicians would take some form of psychedelic, at the very least.
Graham Hancock
(02:13:21)
Yeah. I have no doubt that it would be a better world. I mean, this raises an interesting point, which is the role of government in controlling our consciousness. And in my opinion, the so-called War on Drugs is one of the fundamental abuses of human rights that have been undertaken in the past 60 years. It should be a Republican issue. If I understand the Republican Party correctly, the Republican Party believes in individual freedom for adults as much as possible, and particularly the freedom to make choices over their own bodies.

(02:13:55)
Even in the case of cannabis, and I know this is one of the great things that’s happening in America, it’s happening state by state: cannabis is being legalized, and that draconian hand of government is being taken off the backs of people who are consuming a medicine that is far less harmful than alcohol, which is glorified in our society.

(02:14:19)
We cannot say that we are free if we allow our government to dictate to us what experiences we may or may not have in our inner consciousness, while doing no harm to others. And the point there is we already have a whole raft of laws that deal with us when we do harm to others. Do we really need laws that tell us what we may or may not experience in the inner sanctum of our own consciousness? I think it’s a fundamental violation of adult sovereignty. And we would have far fewer drug problems if these drugs were all legalized and made available to people without shaming them, without punishing them in any way, but just as part of normal social life. And then you could be sure that you were getting good product rather than really shitty product, which has been cut with all sorts of other things.

(02:15:10)
Ultimately, the way forward is for adults to take responsibility for their own behavior, and for society to allow that to happen, and not to have big government taking responsibility for decisions that should be in the hands of individuals.
Lex Fridman
(02:15:24)
And for me also, it’s exciting that some of these substances, like psilocybin, are being integrated into large-scale scientific studies. It’s really interesting.
Graham Hancock
(02:15:33)
We’ve seen a revolution in the way science looks at psychedelics in the last 20, 25 years. They were in that highly demonized category. But again, it’s one of those paradigms which gets overwhelmed by new evidence, and it began to be realized that psilocybin and other psychedelics are very helpful in a range of conditions from which people suffer. Post-traumatic stress disorder. The fear of death when you’re suffering from terminal cancer can be overwhelming, and it’s been found that psilocybin can remove that. Deep depressions can be evaporated with one single massive psilocybin journey. They just go away. There’s really good science on this. And they are being integrated into conventional medicine more and more. We’ll see it happening. I’m not sure if it’ll happen as fast as I would like to see it happen in my lifetime, but it is going to happen.
Lex Fridman
(02:16:29)
Yeah, I actually just recently found out that you had a TED Talk, War on Consciousness, that was taken down, and that was part of the general resistance. Because it was a pretty… It wasn’t radical. It wasn’t really a radical-
Graham Hancock
(02:16:45)
I was talking about ayahuasca and I was talking about the view that I hold very strongly that as long as we do no harm to others, sovereign adults should be allowed to make decisions about their own bodies and not face a jail sentence or shaming as the result. So it was a TEDx Talk, not a TED Talk, organized by a local TED group. They called them TEDx Talks. And I gave this talk about the war on consciousness, and it was immediately pulled down from TED’s main channel with all kinds of bizarre reasons being given. But unfortunately, it was too late because a number of people had already downloaded the talk and then uploaded it onto other YouTube channels. And actually, their banning of it made it go viral in a way that would not have happened otherwise. But again, it’s a sign that points of view that are not acceptable to those in positions of power are simply dismissed and shut down, or at least attempts are made to do so.
Lex Fridman
(02:17:43)
In general, just along that line of thinking, I’m pretty sure that what we understand about consciousness today will seem silly to humans a hundred years from now.
Graham Hancock
(02:17:53)
You bet it will. Especially if we harness psychedelics to investigate consciousness. And that is what is happening at Imperial College right now: the investigation of the experience. They’re not looking… There are other trials that are looking at the therapeutic potential of DMT, but in this case, they’re looking entirely at the experiences that people have and why they’re so similar among people from different age groups and different genders and different parts of the world. They’re all having the same experiences.
Lex Fridman
(02:18:23)
And for me, from an engineer’s perspective, it’s interesting to ask whether it’s possible to engineer consciousness in artificial beings. It’s another way to approach the question of how special human consciousness is. From where does it arise? Is it something that permeates all of life? And then in that case, what is the thing that makes life special? What is life? What are these living organisms that we have here that evolved to create humans? And what is truly special about humans? It’s both scary and exciting to consider the possibility that we can create something like this.
Graham Hancock
(02:19:02)
But why not? We are a vehicle for consciousness, in my view. I think consciousness is present in all life on earth. I don’t think it’s limited to human beings. We have the equipment to manifest and express that consciousness in the way that a dog, for example, doesn’t have or a snail doesn’t have or a pigeon doesn’t have. But when I look at two pigeons sitting on my garden fence and rubbing up close to each other and enjoying each other’s company and taking off together and hanging out together, I think they’re conscious beings. And I think consciousness is everywhere. I think it’s the basis of everything. And I suspect that fundamentally, consciousness is non-physical, and that it can manifest in physical forms where it can then have experiences that would not be available in the non-physical state. That’s a guess.
Lex Fridman
(02:19:52)
That’d be a fascinating… Because then you can construct all kinds of physical forms to manifest the consciousness.
Graham Hancock
(02:19:57)
Yeah. And see if consciousness enters, if they become conscious. Isn’t there some suggestion that artificial intelligence is already becoming conscious?
Lex Fridman
(02:20:04)
That makes humans really uncomfortable, because we are at the top of the food chain, we consider ourselves truly special, and to consider that there’s other things that could be special is scary.
Graham Hancock
(02:20:16)
Well, look how other people make us uncomfortable too. I mean, look at the state of the world today. All the conflicts that are raging. That’s because we’re afraid. When I say we, I’m speaking nation by nation, we are afraid of other people. We fear that they’re going to hurt us or damage us in some way. And so we seek to stop that. It’s the root of many, many conflicts, this fear. And so fear of AI may not be such a good idea after all. It might be very interesting to go down that route and see where it leads. Certainly in terms of exploring consciousness, it is very interesting.
Lex Fridman
(02:20:50)
Yeah, fear is a useful thing, but it can also be destructive.
Graham Hancock
(02:20:54)
Well, it can be destructive and it can shut you down completely.

How the Great Pyramid was built

Lex Fridman
(02:20:58)
If you look into the future, maybe the next hundred years, what do you hope are the interesting discoveries in archeology that we’ll find?
Graham Hancock
(02:21:06)
Well, I’d really like to know how the Great Pyramid was built. And with new scanning technology, it’s now become apparent that there are many major voids within the Great Pyramid. Right above the Grand Gallery, there’s what looks like a second Grand Gallery that has been identified with remote scanning. And new chambers, one of which has even been opened up already, are being found as a result of this. So it may be that the Great Pyramid will ultimately give up its secrets.

(02:21:39)
I often think that the Great Pyramid is partly designed to do that. It’s designed to invite its own initiates. Some people aren’t interested in the Great Pyramid at all, but some people are fascinated by it and they’re drawn towards it. And when they’re drawn towards it, it immediately starts raising questions in their minds, and they seek answers to their questions.

(02:21:59)
So it’s like saying, “Here I stand. Investigate me. Find out about me. Figure out what I am. Why have I got these two shafts cut into the side of the so-called Queen’s Chamber?” Why do they slope up through the body of the Great Pyramid? Why do they not exit on the outside of the Great Pyramid? Why, when we send a robot up those shafts, do we find them after about 160 feet blocked by a door with metal handles? Why, when we drill through that door to see what’s beyond it, do we see another door three or four feet away? It’s very frustrating. But it’s saying to us, “Keep on exploring. If you’re persistent enough, we’ll eventually give you the answer.”

(02:22:40)
So I’m hoping that that answer will come as to how this most mysterious of monuments was actually built and the inspiration that lay behind it. Certainly, I’m sure it was never a tomb, or a tomb only. The later pyramids might’ve been. Actually, no pharaonic burial has ever been discovered in any pyramid. But nevertheless, it’s pretty clear that the later pyramids with the pyramid texts written on the walls, like the pyramid of Unas, a Fifth Dynasty pyramid at Saqqara, were tombs.

(02:23:13)
But the Great Pyramid, to go to that length to create a tomb, to make it a scale model of the earth, to orient it perfectly to true north, to make it 6 million tons. This is not a tomb. This is something else. This is a curiosity device. This is something that is asking us to understand it. And I hope we will understand it. And I hope Egyptologists will be willing to set aside that prejudice that they’re only looking at a tomb and consider other possibilities. And as new tech is revealing these previously unknown inner spaces within the Great Pyramid, I think that’s going to become more and more likely.
Lex Fridman
(02:23:48)
So not just the how it was built, but the why.
Graham Hancock
(02:23:50)
But the why.
Lex Fridman
(02:23:52)
And to you, it seems obvious that there would be a cosmic motivation.
Graham Hancock
(02:23:57)
Yeah, very, very much so. As above, so below. Which is an idea in the Hermetica. The God Hermes for the Greeks was the Greek version of Thoth, the wisdom God of Ancient Egypt. And that’s where that saying comes from. It comes from the Hermetica. But it’s expressing an ancient Egyptian idea, to mirror the perfection of the heavens on earth.
Lex Fridman
(02:24:19)
So you think there’s something interesting to be discovered about how it was built? You mean beyond the ideas of using ramps and wet sand?
Graham Hancock
(02:24:27)
Yeah. Ramps won’t do it. Ramps won’t do it. Nor will wet sand. It’s true that the ancient Egyptians did haul big objects on sleds on wet sand. There are even reliefs that show the process where an individual is standing on the front of the sledge pouring water down to lubricate the sand underneath. And that’s a perfectly respectable way to move a 200 ton block of stone across sand, flat sand, if you have enough people to pull it. But that is not going to help you get dozens of 70 ton granite blocks 300 feet in the air to form the roof of the King’s Chamber and the floor of the chamber above it, and the roof of that chamber, and the floor of the chamber above that, and so on and so forth. Wet sand never got those objects up there. Somehow they were lifted up there.

(02:25:18)
Now, yeah, ramps are proposed as the solution, but where are the remains of those ramps? If you’re going to carry blocks weighing up to two or three tons right to the top of the Great Pyramid to complete your work, you’re going to need a ramp that’s going to extend out into the desert for more than a mile at a 10 degree slope. And it’s calculated that a 10 degree slope is about the maximum slope up which human labor can haul objects. And that ramp can’t just be compacted sand, since heavy objects are being hauled up it. It’s going to have to be made of very solid material, almost as solid as the pyramid itself. Where is it? We don’t see any trace of those so-called ramps that are supposed to have been involved in the construction of the pyramid. I think we don’t know. I think we have no idea how it was built. That’s why there are so many different theories. We haven’t got the answer yet. But the how of it is one of the big mysteries from our past.
Lex Fridman
(02:26:12)
I love the Great Pyramids as a kind of puzzle that was created by the ancient peoples to be solved by later peoples. I don’t know if you’re aware of the 10,000-year clock being built by Jeff Bezos and Danny Hillis in the Sierra Diablo Mountains in Texas. They’re building a clock that ticks once a year for 10,000 years.
Graham Hancock
(02:26:36)
Oh, wow.
Lex Fridman
(02:26:37)
So it’s talking about… And it’s supposed to sort of run, if there’s a nuclear apocalypse, it just runs.
Graham Hancock
(02:26:44)
It’ll keep running.
Lex Fridman
(02:26:45)
It’s an example of modern humans thinking: okay, if 10,000 years from now and beyond, something goes wrong, or future humans that are way different come back and analyze what happened here, how can we create monuments that they could then analyze and be curious about, and in their curiosity, discover some deep truths about this current time? It’s an interesting notion of what we can build now.
Graham Hancock
(02:27:17)
That would last. And the answer is that the majority of what we build now wouldn’t last.
Lex Fridman
(02:27:17)
It wouldn’t.
Graham Hancock
(02:27:23)
It would be gone within a few thousand years. But what would last is massive megalithic structures like the Great Pyramid. That would last. And it could be used to send a message to the future. I think Göbekli Tepe serves a similar function. I mean, there it was, it was buried 10,400 years ago. And then for the next 10,000 years, nobody touched it. Nobody knew it was there. It took the genius of Klaus Schmidt, the original excavator, to realize what he’d found and what it was. But the great thing about the sealing of Göbekli Tepe, the deliberate burial of Göbekli Tepe, is that it means no later culture trod over it and imposed their organic materials on it and messed up the dating sequences and so on and so forth, or vandalized it or used it as a quarry. It’s all there intact.

Mortality

Lex Fridman
(02:28:17)
So you mentioned that the pyramids, and some of the other amazing things that humans have built, were the result of us humans struggling with our mortality.
Graham Hancock
(02:28:28)
That’s the ultimate goal. That seems to me what’s at the heart of many pyramids around the world is that they’re connected in one way or another to the notion of death and to the notion of the exploration of the afterlife. And this is of course, the fundamental mystery that all human beings face. We may wish to ignore it, we may wish to pretend that it’s not going to happen, but we are of course, all mortal. Every one of us, all 8 billion or however many of us that are on the planet right now, we’re all going to face death sooner or later. And the question is what happens?

(02:29:06)
And there are a few cultures that really intensely, deeply studied that mystery. We are not one of them. The general view of science, I think, is that we’re accidents of evolution. When we die, the light blinks out. There’s no more of us. There’s no such thing as the soul. But that’s not a proven point. There’s no experiment that proves that’s the case. We know we die, but we don’t know whether there’s such a thing as a soul or not.
Lex Fridman
(02:29:32)
Yeah, it’s the great mystery.
Graham Hancock
(02:29:34)
It’s a great mystery that we all share, and those cultures that have investigated it, and Ancient Egypt is the best example, have investigated it thoroughly and mapped out the journey that we make after death. But that notion of a journey after death, of hazards and challenges along the way, and ultimately of a judgment, that notion is found right around the world, and it even manifests in the three monotheistic faiths that are still present in the world today.
Lex Fridman
(02:30:01)
Well, you’re one such human, and you said you contemplate your own death.
Graham Hancock
(02:30:07)
Yeah.
Lex Fridman
(02:30:08)
Are you afraid of it?
Graham Hancock
(02:30:09)
No. I’m not afraid of death at all. I’m curious about death. I think it could be very interesting. I think it’s the beginning of the next great adventure. So I don’t fear it. And I would like to live as long as my body is healthy enough to make living worthwhile. But I don’t fear death. What I do fear is pain. I do fear the humiliation that old age and the collapse of the faculties can bring. I do fear the cancers that can strike us down and riddle us with pain and agony. That I fear very, very much indeed.

(02:30:46)
But death is going to come to all of us. I accept it. It’s going to come to me. I’m not going to say I’m looking forward to it, but when it happens, I’m going to approach it, I hope, with a sense of curiosity and a sense of adventure, that there’s something beyond this life. It isn’t heaven, it isn’t hell, but there’s something. The soul goes on. I think reincarnation is a very plausible idea. Again, modern science would reject that. But there’s the excellent work of Ian Stevenson, Children Who Remember Previous Lives, who found that children up to the age of seven often have memories of past lives.

(02:31:25)
And in cultures where memories of past lives are discouraged, they tend not to express that much. But in cultures where memories of past lives are encouraged, like India, they do express it. And he found several subjects, children under the age of seven in India, who were able to remember specific details of a past life, and he was able to go to the place where that past life unfolded and validate those details. So if consciousness is the basis of everything, if it’s the essence of everything, and consciousness benefits in some way from being incarnated in physical form, then reincarnation makes a lot of sense. All the investment that the universe has put into creating this home for life may have a much bigger purpose than just accident.
Lex Fridman
(02:32:10)
What a beautiful mystery this whole thing is.
Graham Hancock
(02:32:12)
Yeah. We are immersed in mystery. We live in the midst of mystery. We’re surrounded by mystery. And if we pretend otherwise, we’re deluding ourselves.
Lex Fridman
(02:32:19)
And Graham, thank you so much for inspiring the world to explore that mystery. Thank you for talking today.
Graham Hancock
(02:32:24)
Thank you, Lex. It’s been a pleasure.
Lex Fridman
(02:32:27)
Thanks for listening to this conversation with Graham Hancock. To support this podcast, please check out our sponsors in the description.

(02:32:34)
And now let me leave you with some words from Charles Darwin. “It is not the strongest of the species that survives, nor the most intelligent. It is the one that is the most adaptable to change.” Thank you for listening and hope to see you next time.

Transcript for Jordan Peterson: Nietzsche, Hitler, God, Psychopathy, Suffering & Meaning | Lex Fridman Podcast #448

This is a transcript of Lex Fridman Podcast #448 with Jordan Peterson.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Lex Fridman
(00:00:00)
The following is a conversation with Jordan Peterson. His second time on this, The Lex Fridman Podcast.

Nietzsche

Lex Fridman
(00:00:08)
You have given a set of lectures on Nietzsche as part of the new Peterson Academy, and the lectures were powerful. There’s some element of the contradictions, the tensions, the drama, the way you, like, lock in on an idea but then struggle with that idea, all of that, that feels like it’s Nietzschean.
Jordan Peterson
(00:00:26)
Well, he’s a big influence on me stylistically and in terms of the way I approached writing, and also many of the people that were other influences of mine were very influenced by him. So I was blown away when I first came across his writings. They’re so intellectually dense that I don’t know if there’s anything that approximates that. Dostoevsky maybe, although he’s much more wordy. Nietzsche is very succinct, partly because he was so ill; he would think all day but couldn’t spend a lot of time writing. And he condenses his thought into this very short, aphoristic style he had, and it’s really something to strive for. And then he’s also an exciting writer like Dostoevsky, and dynamic and romantic in that emotional way. And so it’s really something, and I really enjoyed doing that. That lecture series you described is on the first half of Beyond Good and Evil, which is a stunning book. And it was really fun to take pieces of it and then to describe what they mean and how they’ve echoed across the decades since he wrote them. And yeah, it’s been great.
Lex Fridman
(00:01:40)
Taking each sentence seriously and deconstructing it and really struggling with it. I think underpinning that approach to writing is a deep respect for the writer. I think if we approach writing with that kind of respect, you can take Orwell, you can take a lot of writers, and really dig in on singular sentences.
Jordan Peterson
(00:02:01)
Yeah, well, those are the great writers, because with the greatest writers virtually everything they wrote is worth attending to. And I think Nietzsche is in some ways the ultimate exemplar of that, because often when I read a book, I’ll mark it one way or another; I often fold the corner of the page over to indicate something that I’ve found that’s worth remembering. I couldn’t do that with a book like Beyond Good and Evil because every page ends up marked. And that’s in marked contrast, so to speak, to many of the books I read now, where quite frequently I’ll read a book and there won’t be an idea in it that I haven’t come across before. And with a thinker like Nietzsche, that’s just not the case at the sentence level. And I don’t think there’s anyone that I know of who did that to a greater extent than he did.

(00:02:53)
So there’s other people whose thought is of equivalent value. I’ve returned recently to the work of this Romanian historian of religions, Mircea Eliade, and I’m going to do a course on him. He’s not nearly as well known as he should be, and his work, by the way, is a real antidote to the postmodern, nihilistic, Marxist stream of literary interpretation that the universities as a whole have adopted. And Eliade is like that too. I used this book of his called The Sacred and the Profane quite extensively in a book that I’m releasing in mid-November, We Who Wrestle with God, and it’s of the same sort. It’s endlessly analyzable. Eliade walked through the whole history of religious ideas, and he had the intellect that enabled him to do that. And everything he wrote is dreamlike in its density. So every sentence or paragraph is evocative in an image-rich manner. And that also, what would you say, deepens and broadens the scope.

(00:03:59)
That’s often part of what distinguishes writing that has a literary end from writing that’s merely technical. The literary writings have this imagistic and dreamlike reference space around them. It takes a long time to turn a complex image into something semantic. And so if your writing evokes deep imagery, it has a depth that can’t be captured merely in words. And the great romantic poetic philosophers, Nietzsche is a very good example, Dostoevsky is a good example, so is Mircea Eliade, they have that quality, and it’s a good way of thinking about it. It’s kind of interesting from the perspective of the technical analysis of intelligence, and there’s a good book called The User Illusion, which is the best book on consciousness that I ever read. It explains the manner in which our communication can be understood. So imagine that when you’re communicating something, you’re trying to change the way that your target audience perceives and acts in the world.

(00:05:00)
So that’s an embodied issue, but you’re using words which obviously aren’t equivalent to the actions themselves. You can imagine that the words are surrounded by a cloud of images that they evoke and that the images can be translated into actions. And the greatest writing uses words in a manner that evokes images that profoundly affects perception and action. And so I would take the manner in which I act and behave, I would translate that into a set of images. My dreams do that for me, for example. Then I compress them into words. I toss you the words, you decompose them, decompress them into the images and then into the actions. And that’s what happens in a meaningful conversation. It’s a very good way of understanding how we communicate linguistically.
Lex Fridman
(00:05:51)
So if the words spring to the full visual complexity and then that can then transform itself into action.
Jordan Peterson
(00:06:00)
And change in perception because-
Lex Fridman
(00:06:01)
Change in perception. Yeah.
Jordan Peterson
(00:06:02)
Well, those are both relevant, and it’s an important thing to understand, because the classic empiricists make the presumption, and it’s an erroneous presumption, that perception is a value-free enterprise. And they assume that partly because they think of perception as something passive. You just turn your head and you look at the world and there it is. It’s like, perception is not passive. There is no perception without action, ever, ever. And that’s a weird thing to understand, because even when you’re looking at something, your eyes are moving back and forth; if they ever stop moving for a tenth of a second, you stop being able to see. So your eyes are jiggling back and forth just to keep them active. And then there’s involuntary movements of your eyes, and then there’s voluntary movements of your eyes. What you’re doing with your eyes is very much like what a blind person would do if they were feeling out the contours of an object.

(00:06:53)
You’re sampling and you’re only sampling a small element of the space that’s in front of you, and the element that you choose to sample is dependent on your aims and your goals. So it’s value saturated. And so all your perceptions are action predicated and partly what you’re doing when you’re communicating is therefore not only changing people’s actions, let’s say, but you’re also changing the strategy that they use to perceive. And so you change the way the world reveals itself for them. See, this is why it’s such a profound experience to read a particularly deep thinker because you could also think of your perceptions as the axioms of your thought. That’s a good way of thinking about it. A perception is like a… what would you say? It’s a thought that’s so set in concrete that you now see it rather than conceptualize it. A really profound thinker changes the way you perceive the world. That’s way deeper than just how you think about it or how you feel about it.

Power and propaganda

Lex Fridman
(00:07:49)
What about not just profound thinkers, but thinkers that deliver a powerful idea, for example, utopian ideas of Marx or utopian ideas, you could say dystopian ideas of Hitler? Those ideas are powerful and they can saturate all your perception with values and they focus you in a way where there’s only a certain set of actions.
Jordan Peterson
(00:08:16)
Yeah, right. Even a certain set of emotions as well.
Lex Fridman
(00:08:19)
And it’s intense and it’s direct, and they’re so powerful that they completely alter the perception and the words spring to life.
Jordan Peterson
(00:08:27)
Yeah, it’s like a form of possession. So there’s two things you need to understand to make that clear. The first issue is that, as we suggested or implied, perception is action predicated, but action is goal predicated; we act towards goals. And these propagandistic thinkers that you described, they attempt to unify all possible goals into a coherent singularity. And there’s advantages to that. There’s the advantage of simplicity, for example, which is a major advantage. And there’s also the advantage of motivation. So if you provide people with a simple manner of integrating all their actions, you decrease their anxiety and you increase their motivation. That can be a good thing if the unifying idea that you’ve put forward is valid, but it’s the worst of all possible ideas if you put forward an invalid unifying idea. And then you might say, well, how do you distinguish between a valid unifying idea and an invalid unifying idea?

(00:09:29)
Now, Nietzsche was very interested in that, and I don’t think he got it exactly right. But the postmodernists, for example, especially the ones, and this is most of them, with the Neo-Marxist bent, their presumption is that the fundamental unifying idea is power, that everything’s about compulsion and force essentially, and that that’s the only true unifying ethos of mankind, which is, I don’t know if there’s a worse idea than that. I mean, there are ideas that are potentially as dangerous. The nihilistic idea is pretty dangerous, although it’s more of a disintegrating notion than a unifying idea. The hedonistic idea that you live for pleasure, for example, that’s also very dangerous. But if you wanted to go for sheer pathology, the notion, and this is Foucault in a nutshell, and Marx for that matter, that power rules everything. Not only is that a terrible unifying idea, but it fully justifies your own use of power.

(00:10:25)
And I don’t mean the power Nietzsche talks about. His will to power was more his insistence that a human being is an expression of will rather than a mechanism of self-protection and security. He thought of the life force in human beings as something that strived not to protect itself, but to exhaust itself in being and becoming. It’s like an upward oriented motivational drive even towards meaning. Now he called it the will to power, and that had some unfortunate consequences, at least that’s how it’s translated. But he didn’t mean the power motivation that people like Foucault or Marx became so hung up on.
Lex Fridman
(00:11:06)
So it’s not power like you’re trying to destroy the other. It’s power as the full flourishing of a human being, the creative force of a human being.
Jordan Peterson
(00:11:14)
Yeah. Well, you could imagine, and you should, that you could separate power from competence and ability. Imagine that you and I were going to work on a project. We could organize our project in relationship to the ambition that we wanted to attain, and we could organize an agreement so that you were committed to the project voluntarily and so that I was committed to the project voluntarily. That means that we would actually be united in our perceptions and our actions by the motivation of something approximating voluntary play. Now, you could also imagine another situation where I said, here’s our goal and you’d better help me, or I’m going to kill your family. Well, the probability is that you would be quite motivated to undertake my bidding. And so then you might say, well, that’s how the world works. It’s power and compulsion.

(00:12:09)
But the truth of the matter is that you can force people to see things your way, let’s say, but it’s nowhere near as good a strategy, even practically, as the strategy that would be associated with something like a voluntary joint agreement on a pattern of movement towards a goal. See, this is such an important thing to understand, because it helps you start to understand the distinction between a unifying force that’s based on power and compulsion, and one that is much more in keeping, I would say, with the ethos that governs free western societies. There’s really a qualitative difference, and it’s not some morally relativistic illusion.

Nazism

Lex Fridman
(00:12:55)
If we just look at the nuance of Nietzsche’s thought, the idea he first introduced in Thus Spoke Zarathustra of the Übermensch, that’s another one that’s very easy to misinterpret, because it sounds an awful lot like it’s about power. For example, in the 20th century, it was misrepresented and co-opted by Hitler to advocate for the extermination of the supposedly inferior non-Aryan races.
Jordan Peterson
(00:13:24)
And the dominion of the superior Aryans. Yeah, yeah. Well, that was partly because Nietzsche’s work was also misrepresented by his sister after his death. But I also think that there’s a fundamental flaw in that Nietzschean conceptualization. So Nietzsche, of course, famously announced the death of God, but he did that in a manner that was accompanied by dire warnings, because people tend to think of that as a triumphalist statement. But what Nietzsche really said was something like: the unifying ethos under which we’ve organized ourselves psychologically and socially has now been fatally undermined, well, by the rationalist proclivity, by the empiricist proclivity. There’s a variety of reasons. Mostly it was conflict between the enlightenment view, let’s say, and the classic religious view. And there will be dire consequences of that. And Nietzsche knew, like Dostoevsky knew, that, see, there’s a proclivity for the human psyche and for human societies to move towards something approximating a unity, because the cost of disunity is high.

(00:14:33)
Fractionation of your goals means you’re less motivated to move forward than you might be, because there’s many things competing for your attention. And also anxiety, because anxiety actually signals something like goal conflict. So there’s an inescapable proclivity of value systems to unite. Now, if you kill the thing that’s uniting them, that’s the death of God, then they either fractionate, and you get confusion, anxiety and hopelessness, or you get social disunity, or something else arises out of the abyss to constitute that unifying force. And Nietzsche said specifically that he believed that one of those manifestations would be that of communism, and that that would kill, he said this in Will to Power, that that would kill tens of millions of people in the upcoming 20th century.

(00:15:28)
He could see that coming 50 years earlier. And Dostoevsky did the same thing in his book, Demons. So this is the thing that the areligious have to contend with. It’s a real conundrum, because, I mean, you could dispute the idea that our value systems tend towards a unity, and society does as well, because otherwise we’re disunified. But the cost of that disunity, as I said, is goal confusion, anxiety, and hopelessness. So it’s a real cost. So you could dispense with the notion of unity altogether, and the Postmodernists did that to some degree, but they pulled off a sleight of hand too, where they replaced it with power. Now, Nietzsche is responsible for that to some degree, because what Nietzsche said with his conception of the Übermensch, let’s say, is that human beings would have to create their own values, because the value structure that had descended from on high was now shunted aside.

(00:16:23)
But there’s a major problem with that, many major problems. The psychoanalysts were the first people who really figured this out after Nietzsche, because imagine that we don’t have a relationship with the transcendental anymore that orients us. Okay, now we have to turn to ourselves. Now, if we were a unity, a clear unity within ourselves, let’s say, then we could turn to ourselves for that discovery. But if we’re a fractionated plurality internally, then when we turn to ourselves, we turn to a fractionated plurality. Well, that was Freud’s observation. It’s like, well, how can you make your own values when you’re not the master in your own house?

(00:17:04)
You’re a war of competing motivations, or maybe you’re someone who’s dominated by the will to force and compulsion. And so why do you think that you can rely on yourself as the source of values? And why do you think you’re wise enough to consult with yourself to find out what those values are, or what they should be, say, in the course of a single life? I mean, it’s difficult to organize your own personal relationship, like one relationship, in the course of your life, let alone to imagine that out of whole cloth you could construct an ethos that would be psychologically and socially stabilizing and last over the long run. And of course, Marx, people like that, the people who reduce human motivation to a single axis, they had the intellectual hubris to imagine that they could do that. Postmodernists are a good example of that as well.

Religion

Lex Fridman
(00:17:55)
Okay. But if we lay on the table religion, communism, Nazism, they each offer a unifying ethos. They’re unifying ideas, but they’re also horribly dividing ideas. They both unify and divide. Religion has also divided people, because in the nuances of how the different peoples wrestle with God, they have come to different conclusions, and then the people in power perhaps use those conclusions to start wars, to stir hatred, to divide.
Jordan Peterson
(00:18:32)
Yeah. Well, one of the key sub-themes in the gospels is the sub-theme of the Pharisees. And so the fundamental enemies of Christ in the gospels are the Pharisees and the scribes and the lawyers. So what does that mean? The Pharisees are religious hypocrites. The scribes are academics who worship their own intellect, and the lawyers are the legal minds who use the law as a weapon. And so they’re the enemies of the Redeemer. That’s a subplot in the gospel stories, and it actually all means something. The Pharisaic problem is that the best of all possible ideas can be used by the worst actors in the worst possible way. And maybe this is an existential conundrum: the most evil people use the best possible ideas to the worst possible ends. And then you have the conundrum of how you separate out, let’s say, the genuine religious people from those who use the religious enterprise only for their own machinations.

(00:19:37)
We’re seeing this happen online. One of the things that you’re seeing happening online, I’m sure you’ve noticed this, especially on the right wing psychopathic troll side of the distribution, is the weaponization of a certain form of Christian ideation. And that’s often marked at least online by the presence of, what would you say, cliches like Christ is king, which has a certain religious meaning, but a completely different meaning in this sphere of emerging right wing pathology, “right wing”. The political dimension isn’t the right dimension of analysis, but it’s definitely the case that the best possible ideas can be used for the worst possible purposes. And that also brings up another specter, which is like, well, is there any reliable and valid way of distinguishing truly beneficial, unifying ideas from those that are pathological? And so that’s another thing that I tried to detail out in these lectures, but also in this new book, it’s like, how do you tell the good actors from the bad actors at the most fundamental level of analysis?
Lex Fridman
(00:20:39)
And good ideas from the bad ideas, and you lecture on truth that Nietzsche also struggled with, so how do you know that communism is a bad idea versus it’s a good idea implemented by bad actors?
Jordan Peterson
(00:20:56)
Right. That’s a more subtle variant of the religious problem. And that’s what the communists say all the time, the modern day communists like, “Real communism has never been tried,” and you could say, I suppose with some justification, you could say that real Christianity has never been tried because we always fall short of the ideal mark. My rejoinder to the communists is something like every single time it’s been implemented, wherever it’s been implemented regardless of the culture and the background of the people who’ve implemented it, it’s had exactly the same catastrophic consequences. It’s like, I don’t know how many examples you need of that, but I believe we’ve generated sufficient examples so that that case is basically resolved. Now, the general rejoinder to that is it’s really something like, “Well, if I was in charge of the communist enterprise, the utopia would’ve come about,” but that’s also a form of dangerous pretense.

(00:21:55)
Part of the way… See, that problem is actually resolved to some degree in the notion of… in the developing notion of sacrifice that emerges in the western canon over thousands and thousands of years. So one of the suggestions, for example, and this is something exemplified in the passion story, is that you can tell the valid holder of an idea because that holder will take the responsibility for the consequences of his idea onto himself. And that’s why, for example, you see one way of conceptualizing Christ in the gospel story is as the ultimate sacrifice to God. So you might ask, well, what’s the ultimate sacrifice? And there are variants of the answer to that. One form of ultimate sacrifice is the sacrifice of a child, the offering of a child, and the other is the offering of the self. And the story of Christ brings both of those together because he’s the son of God that’s offered to God.

(00:22:52)
And so it’s a remarkable resolution of that tension between ultimate sacrifices, ultimate because once you’re a parent, most parents would rather sacrifice themselves than their children. So you have something that becomes of even more value than yourself. But the sacrifice of self is also a very high order level of sacrifice. Christ is an archetype of the pattern of being that’s predicated on the decision to take… to offer everything up to the highest value, that pattern of self-sacrifice. And I think part of the reason that’s valid is because the person who undertakes to do that pays the price themselves. It’s not externalized. They’re not trying to change anyone else except maybe by example. It’s your problem. Like Solzhenitsyn pointed that out too when he was struggling with the idea of good versus evil, and you see this in more sophisticated literature.

(00:23:51)
In really unsophisticated literature or drama, there’s a good guy and the bad guy and the good guy’s all good, and the bad guy’s all bad. And in more sophisticated literature, the good and bad are abstracted. You can think of them as spirits. And then those spirits possess all the characters in the complex drama to a greater or lesser degree and that battle is fought out both socially and internally. In the high order religious conceptualizations in the West, if they culminate, let’s say in the Christian story, the notion is that battle between good and evil is fundamentally played out as an internal drama.
Lex Fridman
(00:24:35)
Yeah. So for a religious ethos, the battle between good and evil is fought within each individual human heart.
Jordan Peterson
(00:24:44)
Right. It’s your moral duty to constrain evil within yourself. And while there’s more to it than that, because there’s also the insistence that if you do that, that makes you the most effective possible like warrior, let’s say, against evil itself in the social world, that you start with the battle that occurs within you in the soul, let’s say. The soul becomes the battleground between the forces of good and evil. There’s an idea there too, which is if that battle is undertaken successfully, then it doesn’t have to be played out in the social world as actual conflict. You can rectify the conflict internally without it having to be played out as fate as Jung put it.
Lex Fridman
(00:25:28)
So what would you say to Nietzsche who called Christianity the slave morality, and his critique of religion in that way was slave morality versus master morality, and then you put an Übermensch into that?
Jordan Peterson
(00:25:40)
Well see, I would say that the woke phenomenon is the manifestation of the slave morality that Nietzsche criticized and that there are elements of Christianity that can be gerrymandered to support that mode of perception and conception. But I think he was wrong and he was wrong in his essential criticism of Christianity in that regard. Now, it’s complicated with Nietzsche because Nietzsche never criticizes the gospel stories directly. What he basically criticizes is something like the pathologies of institutionalized religion. And I would say most particularly of the, what would you say, of the sort of casually too nice Protestant form, that’s a thumbnail sketch and perhaps somewhat unfair.

(00:26:37)
But given the alignment, let’s say, of the more mainstream Protestant movements with the woke mob, I don’t think it’s an absurd criticism. It’s something like the degeneration of Christianity into the notion that good and harmless are the same thing, or good and empathic are the same thing, which is simply not true and far too simplified. And I also think Nietzsche was extremely wrong in his presumption that human beings should take it to themselves to construct their own values. I think he made a colossal error in that presumption.
Lex Fridman
(00:27:13)
And that is the idea of the Übermensch, that the great individual, the best of us should create our own values.
Jordan Peterson
(00:27:20)
Well, and I think the reason that he was wrong about that is that, so when God gives instructions to Adam and Eve in the Garden of Eden, he basically tells them that they can do anything they want in the walled garden. So that’s the kind of balance between order and nature that makes up the human environment. Human beings have the freedom vouchsafed to them by God to do anything they want in the garden except to mess with the most fundamental rule. So God says to people, “You’re not to eat of the fruit of the tree of the knowledge of good and evil,” which fundamentally means there is an implicit moral order and you’re to abide by it. Your freedom stops at the foundation. And you can think about that. I’d be interested even in your ideas about this as an engineer, let’s say: there is an ethos that’s implicit in being itself, and your ethos has to be a reflection of that, and that isn’t under your control.

(00:28:18)
You can’t gerrymander the foundation because your foundational beliefs have to put you in harmony like musical harmony with the actual structure of reality as such. So I can give you an example of that. So our goal insofar as we’re conducting ourselves properly, is to have the kind of interesting conversation that allows both of us to express ourselves in a manner that enables us to learn and grow, such that we can share that with everyone who’s listening. And if our aim is true and upward, then that’s what we’re doing. Well, that means that we’re going to have to match ourselves to a pattern of interaction, and that’s marked for us emotionally. Like you and I both know this, if we’re doing this right…
Jordan Peterson
(00:29:03)
We’re going to be interested in the conversation. We’re not going to be looking at our watch. We’re not going to be thinking about what we’re aiming at. We’re just going to communicate. Now, the religious interpretation of that would be that we were doing something like making the redemptive logos manifest between us in dialogue, and that’s something that can be shared.

(00:29:22)
To do that, we have to align with that pattern. I can’t decide that there’s some arbitrary way that I’m going to play you. I mean, I could do that if I was a psychopathic manipulator. But to do that optimally, I’m not going to impose a certain a priori aim, let’s say, on our communication and manipulate you into that. So the constraints on my ethos reflect the actual structure of the world.

(00:29:55)
These are the communist presumptions. It’s like, we’re going to burn everything down and we’re going to start from scratch. And we’ve got these axiomatic presumptions, and we’re going to put them into place. And we’re going to socialize people so they now think and live like communists from day one. And human beings are infinitely malleable, and we can use a rational set of presuppositions to decide what sort of beings they should be.

(00:30:17)
The transhumanists are doing this too. It’s like, no, there’s a pattern of being that you have to fall into alignment with. I think it’s the pattern of being, by the way, that if you fall into alignment with, it gives you hope, it protects you from anxiety, and it gives you a sense of harmony with your surroundings and with other people. And none of that’s arbitrary.
Lex Fridman
(00:30:39)
But don’t you think we both arrived at this conversation with rigid axioms? Maybe we’re blind to them, but in the same way that the Marxists came with very rigid axioms about the way the world is and the way it should be. Aren’t we coming to that?
Jordan Peterson
(00:30:54)
Well, we definitely come to the conversation with a hierarchy of foundational axioms. And I would say the more sophisticated you are as a thinker, the deeper the level at which you’re willing to play. So imagine first that you have presumptions of different depth. There’s more predicated on the more fundamental axioms, and then that there’s a space of play around those.

(00:31:17)
And that space of play is going to depend on the sophistication of the player, obviously. But those who are capable of engaging in deeper conversations talk about more fundamental things with more play. Now, we have to come to the conversation with a certain degree of structure, because we wouldn’t be able to understand each other or communicate if a lot of things weren’t already assumed or taken for granted.
Lex Fridman
(00:31:43)
How rigid is the hierarchy of axioms that religion provides? This is what I’m trying to understand, the rigidity of that hierarchy.
Jordan Peterson
(00:31:51)
It’s as rigid as play.
Lex Fridman
(00:31:53)
Well, play is not rigid at all.
Jordan Peterson
(00:31:54)
No, no, no, no, no, no. It’s got a rigidity.
Lex Fridman
(00:31:56)
There’s some constraints.
Jordan Peterson
(00:31:58)
It took me about 40 years to figure out the answer to that question. I’m serious about that. It wasn’t a random answer. So play is very rigid in some ways. If you and I go out to play basketball or chess, there are rules and you can’t break the rules because then you’re no longer in the game. But then there’s a dynamism within those rules that’s… Well, with chess, it’s virtually infinite. I mean, I think, what is it?

(00:32:22)
There are more patterns of potential games on a chessboard than there are subatomic particles in the observable universe. It’s an insane space. So it’s not like there’s not freedom within it. But it’s a weird paradox in a way, isn’t it? Because music is like this too, is that there are definitely rules. You can’t throw a basketball onto a chessboard and still be playing chess. But weirdly enough, if you adhere to the rules, the realm of freedom increases rather than decreasing.

(00:32:54)
I think you can make the same case for a playful conversation. It’s like we’re playing by certain rules and a lot of them are implicit, but that doesn’t mean that… It might mean the reverse of constraint. Because in this seminar, for example, that I was referring to, the Exodus Seminar and then the Gospel Seminar, everybody in this seminar, there’s about eight of us, played fair.

(00:33:16)
Nobody used power. Nobody tried to prove they were right. They put forward their points, but they were like, “Here’s a way of looking at that. Assess it.” They were also doing it genuinely. It’s like, this is what I’ve concluded about say this story. And I’m going to make a case for it, but I’d like to hear what you have to say because maybe you can change it, you can extend it, you can find a flaw in it.

(00:33:41)
Well, that’s a conversation that has flow and that’s engaging and that other people will listen to as well. See, I think that one of the things that we can conclude now, and we can do this even from a neuroscientific basis, is that that sense of engaged meaning is a marker not only for the emergence of harmony between you and your environment, but for the emergence of that harmony in a way that is developmentally rich, that moves you upward towards…

(00:34:08)
What would you say? Well, I think towards a more effective entropic state. That’s actually the technical answer to that. But it makes you more than you are, and there’s a directionality in that.

Communism

Lex Fridman
(00:34:20)
The reason I like talking about communism is because it has clearly been shown, as a set of ideas, to be destructive to humanity. But I would like to understand, from an engineering perspective, the characteristics of communism versus religion, where you could identify that religious thought is going to lead to a better human being, a better society, and communist Marxist thought does not.

(00:34:49)
Because there’s ambiguity, there’s room for play in communism and Marxism, because they had a utopian sense of where everybody’s headed, but don’t know how it’s going to happen. Maybe revolution is required. But after the revolution is done, we’ll figure it out. And there’s an underlying assumption that maybe human beings are good and they’ll figure it out once you remove the oppressor.

(00:35:11)
I mean, all these ideas, until you put them into practice, can be quite convincing if you were in the 19th century. If I was reading them then, which is fascinating, the 19th century produced such powerful ideas, Marx and Nietzsche.
Jordan Peterson
(00:35:28)
Fascism too, for that matter.
Lex Fridman
(00:35:29)
Fascism. So if I was sitting there, especially if I’m feeling shitty about myself, a lot of these ideas are pretty powerful as a way to plug the nihilist hole.
Jordan Peterson
(00:35:42)
Yeah, right, absolutely. Well, and some of them may actually have an appropriate scope of application. It could be that some of the foundational axioms of communism, socialism/communism, are actually functional in a sufficiently small social group, maybe a tribal group even. I’m not sure this is correct, but I have a suspicion that the pervasive attractiveness of some of the radical left ideas that we’re talking about are pervasive precisely because they are functional within say families, but also within the small tribal groups that people might’ve originally evolved into.

(00:36:19)
And that once we become civilized, so we produce societies that are united even among people who don’t know one another, different principles have to apply as a consequence of scale. So that’s partly an engineering response, but I think there’s a deeper way of going after the communist problem. So I think part of the fundamental problem with the communist axioms is the notion that the world of complex social interactions can be simplified sufficiently so that centralized planning authorities can deal with it.

(00:36:54)
And I think the best way to think about the free exchange rejoinder to that presumption is no, the sum total of human interactions in a large civilization are so immense that you need a distributed network of cognition in order to compute the proper way forward. And so what you do is you give each actor their domain of individual choice so that they can maximize their own movement forward.

(00:37:18)
And you allow the aggregate direction to emerge from that rather than trying to impose it from the top down, which I think is computationally impossible. So that might be one engineering reason why the communist solution doesn’t work. Like I read in Solzhenitsyn, for example, that the Central Soviet authorities often had to make 200 pricing decisions a day. Now, if you’ve ever started a business or created a product and had to wrestle with the problem of pricing, you’d become aware of just how intractable that is.

(00:37:52)
How do you calculate worth? Well, there’s the central existential problem of life. How do you calculate worth? It’s not something a central authority can sit down and just manage. There are a lot of inputs that go into a pricing decision. And the free market answer to that is something like, well, if you get the price right, people will buy it and you’ll survive.
Lex Fridman
(00:38:14)
This is a fascinating way to describe how ideas fail. So communism perhaps fails because, just like with people who believe the earth is flat, when you look outside, it looks flat, but you can’t see beyond the horizon, I guess. In the same way with communism, communism seems like a great idea within my family and among people I love, but it doesn’t scale.
Jordan Peterson
(00:38:37)
And it doesn’t iterate, and that’s a form of scaling too.
Lex Fridman
(00:38:41)
Right. Well, I mean, whatever ways it breaks down, it doesn’t scale. And you’re saying religious thought is a thing that might scale.
Jordan Peterson
(00:38:49)
I would say religious thought is the record of those ideas that have in fact scaled. Right, right.
Lex Fridman
(00:38:54)
And iterated.
Jordan Peterson
(00:38:55)
And iterated.
Lex Fridman
(00:38:56)
Does religious thought iterate? I mean, there’s a fundamental conservative aspect to religious thought, tradition.
Jordan Peterson
(00:39:05)
This is why I like Mircea Eliade, for example, who I referred to earlier. One of the things Eliade did and very effectively, and people like Joseph Campbell, who in some ways were popularizers of Eliade’s ideas and Carl Jung’s, what they really did was devote themselves to an analysis of those ideas that scaled and iterated across the largest possible spans of time.

(00:39:27)
And so Eliade and Jung, Erich Neumann and Campbell, they were looking at patterns of narrative that were common across religious traditions that had spanned millennia and found many patterns. The hero’s myth, for example, is one of those patterns. And I think the evidence that it has its reflection in human neurophysiology and neuropsychology is incontrovertible.

(00:39:49)
And so these foundational narratives, they last. They’re common across multiple religious traditions. They unite. They work psychologically, but they also reflect the underlying neurophysiological architecture. So I can give you an example of that. So the hero myth is really a quest myth. And a quest myth is really a story of exploration and expansion of adaptation.

Hero myth


(00:40:12)
So Bilbo the Hobbit, he’s kind of an ordinary everyman. He lives in a very constrained and orderly and secure world. And then the quest call comes and he goes out and he expands his personality and develops his wisdom. And that’s reflected in human neuropsychological architecture at a very low level, way below cognition. So one of the most fundamental elements of the mammalian brain, and even in lower animal forms, is the hypothalamus.

(00:40:40)
It’s the root of primary motivation. So it governs lust, and it regulates your breathing, and it regulates your hunger, and it regulates your thirst, and it regulates your temperature. Like really low level biological necessities are regulated by the hypothalamus. When you get hungry, it’s the hypothalamus. When you’re activated in a defensively aggressive manner, that’s the hypothalamus.

(00:41:04)
Half the hypothalamus is the origin of the dopaminergic tracts, and they subserve exploration. And so you could think of the human motivational reality as a domain that’s governed by axiomatic motivational states, love, sex, defensive aggression, hunger, and another domain that’s governed by exploration. And the rule would be something like when your basic motivational states are sated, explore.

(00:41:32)
And that’s not cognitive. Like I said, this is deep, deep brain architecture. It’s extraordinarily ancient. And the exploration story is something like go out into the unknown and take the risks because the information that you discover and the skills you develop will be worthwhile, even in sating the basic motivational drives. And then you want to learn to do that in an iterative manner so it sustains across time, and you want to do it in a way that unites you with other people.

(00:42:03)
And there’s a pattern to that, and I do think that’s the pattern that we strive to encapsulate in our deep religious narratives. And I think that in many ways we’ve done that successfully.

Belief in God

Lex Fridman
(00:42:13)
What is the belief in God, how does that fit in? What does it mean to believe in God?
Jordan Peterson
(00:42:21)
Okay, so one of the stories that I cover in We Who Wrestle with God, which I’ve only recently begun to take apart, say in the last two years, is the story of Abraham. It’s a very cool story, and it’s also related, by the way, to your question about what makes communism wrong. And Dostoevsky knew this. Not precisely the Abraham story, but the same reason. In Notes from Underground, Dostoevsky made a very telling observation.

(00:42:47)
So he speaks in the voice of a cynical nihilistic and bitter bureaucrat who’s been a failure, who’s talking cynically about the nature of human beings, but also very accurately. And one of the things he points out with regards to modern utopianism is that human beings are very strange creatures.

(00:43:04)
And that if you gave them what the socialist utopians want to give them, so let’s say all your needs are taken care of, all your material needs are taken care of and even indefinitely, Dostoevsky’s claim was, well, you don’t understand human beings very well. Because if you put them in an environment that was that comfortable, they would purposefully go insane just to break it into bits just so something interesting would happen.

(00:43:28)
Right. And he says it’s the human proclivity to curse and complain. He says this in quite a cynical and caustic manner, but he’s pointing to something deep, which is that we’re not built for comfort and security. We’re not infants. We’re not after satiation. So then you might ask, well, what the hell are we after then? That’s what the Abraham story addresses. Abraham is the first true individual in the biblical narrative.

(00:43:55)
So you could think about his story as the archetypal story of the developing individual. So you said, well, what’s God? Well, in the Abraham story, God is characterized in a lot of different ways in the classic religious texts. Like the Bible is actually a compilation of different characterizations of the divine with the insistence that they reflect an underlying unity. In the story of Abraham, the divine is the call to adventure.

(00:44:21)
So Abraham has the socialist utopia at hand. He’s from a wealthy family, and he has everything he needs. And he actually doesn’t do anything until he’s in his 70s. Now, hypothetically, people in those times lived much longer. But a voice comes to Abraham and it tells him something very specific. It says, “Leave your zone of comfort. Leave your parents. Leave your tent. Leave your community. Leave your tribe. Leave your land. Go out into the world.”

(00:44:52)
And Abraham thinks, well, why? I’ve got naked slave girls peeling grapes and feeding them to me. It’s like, what do I need an adventure for? And God tells him, and this is the covenant, by the way, part of the covenant that the God of the Israelites makes with his people. It’s very, very specific. It’s very brilliant. He says, “If you follow the voice of adventure, you’ll become a blessing to yourself.”

(00:45:18)
So that’s a good deal because people generally live at odds with themselves. And he says, God says, “That’s not all. You’ll become a blessing to yourself in a way that furthers your reputation among people, and validly, so that you’ll accomplish things that are real and people will know it. And you’ll be held high in their esteem and that will be valid.” So that’s a pretty good deal because people are social and would like to be regarded as of utility and worth by others.

(00:45:49)
And so that’s a good deal. And God says, “That’s not all. You’ll establish something of lasting permanent and deep value.” That’s why Abraham becomes the father of nations. And finally, he caps it off and he says, “There’s a better element even to it. There’s a capstone. You’ll do all three of those things in a way that’s maximally beneficial to everyone else.” And so the divinity in the Abrahamic story is making a claim.

(00:46:14)
He says, first of all, there’s a drive that you should attend to, so the spirit of adventure that calls you out of your zone of comfort. Now, if you attend to that and you make the sacrifices necessary to follow that path, then the following benefits will accrue to you. Your life will be a blessing. Everyone will hold you in high esteem. You’ll establish something of permanent value, and you’ll do it in a way that’s maximally beneficial to everyone else.

(00:46:40)
And so think about what this means biologically or from an engineering standpoint. It means that the instinct to develop that characterizes outward moving children, let’s say, or adults is the same instinct that allows for psychological stability, that allows for movement upward in a social hierarchy that establishes something iterable, and that does that in a manner that allows everyone else to partake in the same process.

(00:47:07)
Well, that’s a good deal. I can’t see how it cannot be true, because the alternative hypothesis would be that the spirit that moves you beyond yourself to develop, the spirit of a curious child, let’s say, what, is that antithetical to your own esteem? Is that antithetical to other people’s best interest? Is it not the thing that increases the probability that you’ll do something permanent? That’s a stupid theory.
Lex Fridman
(00:47:33)
So God is a call to adventure with some constraints.
Jordan Peterson
(00:47:38)
A call to true adventure.
Lex Fridman
(00:47:40)
To true adventure.
Jordan Peterson
(00:47:40)
True adventure. Yeah. And then that’s a good observation because that begs the question, what constitutes the most true adventure? Well, that’s not fully fleshed out until, at least from the Christian perspective, let’s say, that’s not fully fleshed out until the gospels, because the Passion of Christ is the… This is a perfectly reasonable way of looking at it. The Passion of Christ is the truest adventure of Abraham.

(00:48:07)
That’s a terrible thing, A, because the passion story is a catastrophic tragedy, although it obviously has its redemptive elements. But one of the things that’s implied there is that there’s no distinction between the true adventure of life and taking on the pathway of maximal responsibility and burden. And I can’t see how that cannot be true. Because the counter hypothesis is, well, Lex, the best thing for you to do in your life is to shrink from all challenge and hide, to remain infantile, to remain secure, not to ever push yourself beyond your limits, not to take any risks. Well, no one thinks that’s true.
Lex Fridman
(00:48:48)
So basically, the maximally worthwhile adventure could possibly be highly correlated with the hardest possible available adventure.
Jordan Peterson
(00:48:58)
The hardest possible available adventure voluntarily undertaken.
Lex Fridman
(00:49:03)
Does it have to be voluntary?
Jordan Peterson
(00:49:04)
Absolutely.
Lex Fridman
(00:49:05)
How do you define voluntarily?
Jordan Peterson
(00:49:06)
Well, here’s an example of that. That’s a good question too. The night before the crucifixion, which in principle he knows is coming, he asks God to relieve him of his burden, and understandably so. I mean, that’s the scene famously in which he’s literally sweating blood because he knows what’s coming. And the Romans designed crucifixion to be the most agonizing, humiliating, and disgusting possible death. Right. So there was every reason to be apprehensive about that.

(00:49:41)
And you might say, well, could you undertake that voluntarily as an adventure? And the answer to that is something like, well, what’s your relationship with death? That’s a problem you have to solve. And you could fight it and you could be bitter about it. And there’s reasons for that, especially if it’s painful and degrading. But the alternative is something like… Well, it’s what’s fleshed out in religious imagery always.

(00:50:07)
It’s very difficult to cast into words. It’s like, no, you welcome the struggle. That’s why I called the book, We Who Wrestle with God. You welcome the struggle. And Lex, I don’t see how you can come to terms with life without construing it as something like, bring it on. Welcome the struggle. I can’t see that there’s a limit to that. It’s like, well, I welcome the struggle until it gets difficult.
Lex Fridman
(00:50:37)
So there’s not a bell curve, like the struggle of moderation. Basically, you have to welcome whatever as hard as it gets, and the crucifixion in that way is a symbol.
Jordan Peterson
(00:50:48)
Of that. Well, it’s worse than that in some ways because the crucifixion exemplifies the worst possible death. But that isn’t the only element of the struggle. Because mythologically, classically, after Christ’s death, he harrows hell. And what that means, as far as I can tell psychologically, is that you’re not only required, let’s say, to take on the full existential burden of life and to welcome it regardless of what it is and to maintain your upward aim despite all temptations to the contrary, but you also have to confront the root of malevolence itself.

(00:51:26)
So it’s not merely tragedy. And I think the malevolence is actually worse. The reason I think that is because I know the literature on post-traumatic stress disorder, and most people who encounter, let’s say, a challenge that’s so brutal that it fragments them, it isn’t mere suffering that does that to people. It’s an encounter with malevolence that does that to people.

(00:51:48)
Their own, often, by the way. A soldier will go out onto a battlefield and find out that there’s a part of him that really enjoys the mayhem, and that conceptualization doesn’t fit in well with everything he thinks he knows about himself and humanity. And after that contact with that dark part of himself, he never recovers. That happens to people, and it happens to people who encounter bad actors in the world too.

(00:52:15)
If you’re a naive person and the right narcissistic psychopath comes your way, you are in mortal trouble because you might die, but that’s not where the trouble ends.

Advice for young people

Lex Fridman
(00:52:25)
If there’s a young man in their 20s listening to this, how do they escape the pull of Dostoevsky’s Notes from Underground? With their eyes open to the world, how do they select the adventure?
Jordan Peterson
(00:52:39)
So there’s other characterizations of the divine say in the Old Testament story. So one pattern of characterization that I think is really relevant to that question is the conception of God as calling and conscience. Okay, so what does it mean? It’s a description of the manner in which your destiny announces itself to you. I’m using that terminology, and it’s distinguishable say from Nietzsche’s notion that you create your own values.

(00:53:09)
It’s like part of the way you can tell that that’s wrong is that you can’t voluntarily gerrymander your own interests. You find some things interesting, and that seems natural and autonomous, and other things you don’t find interesting and you can’t really force yourself to be interested in them. So what is the domain of interest that makes itself manifest to you? Well, it’s like an autonomous spirit. It’s like certain things in your field of perception are illuminated to you.

(00:53:39)
You think, “Oh, that’s interesting. That’s compelling. That’s gripping.” Rudolf Otto, who studied the phenomenology of religious experience, described that as numinous. The thing grips you because you’re compelled by it, and maybe it’s also somewhat anxiety provoking. It’s the same reaction a cat has to a dog. When the cat’s hair stands on end, that’s an awe response. And so there’s going to be things in your phenomenological field that pull you forward, compel you.

(00:54:10)
That’s like the voice of positive emotion and enthusiasm. Things draw you into the world. It might be love. It might be aesthetic interest. It might be friendship. It might be social status. It might be duty and industriousness. There’s various domains of interest that shine for people. That’s on the positive side. God is calling. That would be akin to the spirit of adventure for Abraham. But there’s also God as conscience, and this is a useful thing to know too.

(00:54:44)
Certain things bother you. They take root within you and they turn your thoughts towards certain issues. Like there are things you’re interested in that you’ve pursued your whole life. There are things I’m interested in that I felt as a moral compulsion. And so you could think and I think the way you can think about it technically is that something pulls you forward so that you move ahead and you develop.

(00:55:11)
And then another voice, this is a voice of negative emotion, says while you’re moving forward, stay on this narrow pathway. And it’ll mark deviations, and it marks deviations with shame and guilt and anxiety, regret. And that actually has a voice. Don’t do that. Well, why not? Well, you’re wandering off the straight and narrow path. So the divine marks the pathway forward and reveals it, but then puts up the constraints of conscience. And the divine in the Old Testament is portrayed not least as the dynamic between calling and conscience.
Lex Fridman
(00:55:46)
What do you do with the negative emotions? You didn’t mention envy. There’s some really dark ones that can really pull you into some bad places, envy, fear.
Jordan Peterson
(00:55:55)
Yeah, envy is a really bad one. Pride and envy are among the worst. Those are the sins of Cain, by the way, in the story of Cain and Abel, because Cain fails because his sacrifices are insufficient. He doesn’t offer his best. And so he’s rejected and that makes him bitter and unhappy. And he goes to complain to God, and God says to him two things. God tells him, “If your sacrifices were appropriate, you’d be accepted.” It’s a brutal thing. It’s a brutal rejoinder. And he also says, “You can’t blame your misery on your failure.

(00:56:29)
You could learn from your failure. When you failed, you invited in the spirit of envy and resentment, and you allowed it to possess you. And that’s why you’re miserable.” And so Cain is embittered by that response, and that’s when he kills Abel. You might say, well, how do you fortify yourself against that pathway of resentment? Part of classic religious practice is aimed at doing precisely that. What’s the antithesis of envy? Gratitude. That’s something you can practice. And I mean, literally practice.
Lex Fridman
(00:57:02)
I think envy is one of the biggest enemies for a young person because basically you’re starting from nowhere. Life is hard. You’ve achieved nothing. And you’re striving and you’re failing constantly because…
Jordan Peterson
(00:57:17)
And you see other people whom you think aren’t having the same problem.
Lex Fridman
(00:57:21)
Yeah, and they succeed. And they could be your neighbor, they could be succeeding by a little bit, or somebody on the internet succeeding by a lot. And I think that that can really pull a person down. That kind of envy can really destroy a person.
Jordan Peterson
(00:57:34)
Yeah, yeah, definitely. Well, the gratitude element would be something like, well, yeah, you don’t know anything and you’re at the bottom, but you’re not 80. One of the best predictors of wealth in the United States is age. So then you might say, well, who’s got it better, the old rich guy or the young poor guy? And I would say most old rich guys would trade their wealth for youth.

(00:58:03)
So it’s not exactly clear at all at any stage who’s got the upper hand, who’s got the advantage? And you could say, “Well, I’ve got all these burdens in front of me because I’m young and oh my God.” Or you could say, “Every dragon has its treasure.” And that’s actually a pattern of perception. I’m not saying that people don’t have their challenges. They certainly do. But discriminating between a challenge and an opportunity is very, very difficult. And learning to see a challenge as an opportunity, that’s the beginning of wisdom.
Lex Fridman
(00:58:36)
It’s interesting. I don’t know how it works. Maybe you can elucidate, but when you have envy towards somebody, if you just celebrate them, so gratitude, but actually as opposed to sort of ignoring and being grateful for the things you have, literally celebrate that person. It transforms … It lights the way. I don’t know why that is exactly.
Jordan Peterson
(00:59:01)
Absolutely. The only reason you’re envious is because you see someone who has something that you want. Okay, so let’s think about it. Well, first of all, the fact that they have it means that in principle, you could get it. At least someone has. So that’s a pretty good deal. And then you might say, “Well, the fact that I’m envious of that person means that I actually want something.” And then you might think, “Well, what am I envious of? I’m envious of their attractiveness to women.” It’s like, okay, well now you know something about yourself. You know that one true motivation that’s making itself manifest to you is that you wish that you would be the sort of person who is attractive to women. Now, of course, that’s an extremely common longing among men, period. But particularly among young men. It’s like, well, what makes you so sure you couldn’t have that?

(00:59:52)
Well, how about, here’s an answer. You don’t have enough faith in yourself. And maybe you don’t have enough faith in, well, I would say the divine. You don’t believe that the world is characterized by enough potentiality so that even miserable you has a crack at the brass ring. I talked about this actually practically in one of my previous books, because I wrote a chapter called Compare Yourself to Who You Are and Not to Someone Else at the Present Time. Well, why? Well, your best benchmark for tomorrow is you today. And you might not be able to have what someone else has on the particular axis you’re comparing yourself with them on, but you could make an incremental improvement over your current state regardless of the direction that you’re aiming.

(01:00:38)
And it is the case, and this is a law. The return on incremental improvement is exponential or geometric and not linear. So even if you start … This is why the hero is always born in a lowly place, mythologically. Christ, who redeems the world, is born in a manger with the animals, to impoverished parents, in the middle of a God-forsaken desert, in a nondescript time and place, isolated. Well, why? Well, because everyone young struggles with their insufficiency. But that doesn’t mean that great things can’t make themselves manifest. And part of the insistence in the biblical text, for example, is that it’s incumbent on you to have the courage to have faith in yourself and in the spirit of reality, the essence of reality, regardless of how you construe the evidence at hand. Right. Look at me, I’m so useless. I don’t know anything. I don’t have anything. It’s hopeless. I don’t have it within me. The world couldn’t offer me that possibility. Well, what the hell do you know about that?

(01:01:48)
This is what Job figures out in the midst of his suffering in the Book of Job, because Job is tortured terribly by God, who makes a bet with Satan himself to bring him down. And Job’s decision in the face of his intense suffering is, “I’m not going to lose faith in my essential goodness, and I’m not going to lose faith in the essential goodness of being itself, regardless of how terrible the face it’s showing to me at the moment happens to be.” And I think, okay, what do you make of that claim? Well, let’s look at it practically.

(01:02:23)
You’re being tortured by the arbitrariness of life. That’s horrible. Now you lose faith in yourself and you become cynical about being. So are you infinitely worse off instantly? And then you might say, “Well yeah,” but it’s really asking a lot of people that they maintain faith even in their darkest hours. It’s like, yeah, that might be asking everything from people. But then you also might ask … This is a very strange question. If you were brought into being by something that was essentially good, wouldn’t that thing that brought you into being demand that you make the best in yourself manifest? And wouldn’t it be precisely when you most need it that you’d be desperate enough to risk what it would take to let it emerge?
Lex Fridman
(01:03:17)
So you kind of make it seem that reason could be the thing that takes you out of a place of darkness. Finding that calling through reason. I think it’s also possible when reason fails you to just take the leap. Navigate not by reason, but by finding the thing that scares you. The risk. Take the risk, take the leap, and then figure it out while you’re in the air.
Jordan Peterson
(01:03:44)
Yeah. Well, I think that’s always part of a heroic adventure, that ability to cut the Gordian knot. But you could also ask from an engineering perspective, okay, what are the axioms that make a decision like that possible? And the answer would be something like, I’m going to make the presumption that if I move forward in good faith, whatever happens to me will be the best thing that could possibly happen, no matter what it is. And I think that’s actually how you make an alliance with truth. And I also think that truth is an adventure. And the way you make an alliance with truth is by assuming that whatever happens to you, if you are living in truth, is the best thing that could happen, even if you can’t see that at any given moment. Because otherwise you’d say that truth would be just the handmaiden of advantage. Well, I’m going to say something truthful, and I pay a price. Well, that means I shouldn’t have said it. Well, possibly, but that’s not the only possible standard of evaluation. Because what you’re doing is you’re making the outcome your deity. Well, I’d just reverse that and say, no, no. Truth is the deity. The outcome is variable, but that doesn’t eradicate the initial axiom. Where’s the constant? What’s the constant?

Sex

Lex Fridman
(01:05:03)
It may be when you said Abraham was being fed by naked ladies-
Jordan Peterson
(01:05:10)
That’s an interpolation, obviously, but it wouldn’t have been out of keeping with the times.
Lex Fridman
(01:05:14)
But it does make me think, in stark contrast with Nietzsche’s own life, that perhaps getting laid early on in life is a useful start. Step one, get laid, and then go for adventure. There’s some basic satiation of base desires.
Jordan Peterson
(01:05:32)
So I think it’s perfectly reasonable to bring the sexual element in because it’s a powerful motivating force, and it has to be integrated. I don’t think it’s adventure. It’s romantic adventure.
Lex Fridman
(01:05:42)
Right, but the lack of basic interaction, sexual interaction, I feel like is the engine that drives towards that cynicism of the incel in Dostoevsky’s Notes from Underground.
Jordan Peterson
(01:05:57)
There’s very little doubt about that. We know perfectly well anthropologically that the most unstable social situation you can generate is young men with no access to women. That’s not good. They’ll do anything, anything to reverse that situation. So that’s very dangerous.

(01:06:15)
But then I would also say there’s every suggestion that the pathway of adventure itself is the best pathway to romantic attractiveness. And we know this, in some ways, in a very blunt manner. The Google boys, the engineers who are too … what would you say? … naively oriented towards empirical truth to notice when they’re being politically incorrect, they wrote a great book called A Billion Wicked Thoughts, which I really like. It’s a very good book. And it’s engineers as psychologists. And so they’ll say all sorts of things that no one with any sense would ever say that happen to be true. And they studied the pattern of pornographic fantasy, and women like pornographic stories, not images. So women’s use of pornography is literary. Who are the main protagonists in female pornographic fantasy? Pirates, werewolves, vampires, surgeons, billionaires. Tony Stark.

(01:07:13)
And so the basic pornographic narrative is Beauty and the Beast. Those five categories. Terrible, aggressive male, tamable by the right relationship, hot erotic attraction. And so I would say to the young men who are locked in isolation, and I have said this many times: first of all, “Join the bloody club.” Because the default value of a 15-year-old male on the mating market is zero. And there’s a reason for that. Zero is a bit of an exaggeration, but not much. And the reason for that is, well, what the hell do you know? You’re not good for anything. You have potential, and maybe plenty, and hopefully that’ll be made manifest, but you shouldn’t be all upset because you’re the same loser as everyone else your age has always been since the beginning of time.

(01:08:03)
But then you might ask, “Well, what should I do about it?” and the answer is, get yourself together. Stand up straight with your shoulders back, take on some adventure, find your calling, abide by your conscience, put yourself together and you’ll become attractive. And we know this is … Look, we know this is true. The correlation between male sexual opportunity and relative masculine status is about 0.6. That’s higher than the correlation between intelligence and academic achievement. I don’t think that there’s a larger correlation between two independent phenomena in the entire social science and health literature than the correlation between relative male social status and reproductive success. It’s by far the most fundamental determinant.
Lex Fridman
(01:08:52)
What’s the cause and effect there?
Jordan Peterson
(01:08:54)
It’s a loop. Men are motivated to attain social status because it confers upon them reproductive success. And that’s not only cognitively, but biologically. I’ll give you an example of this.

(01:09:04)
There’s a documentary I watch from time to time, which I think is the most brilliant documentary I’ve ever seen. It’s called Crumb, and it’s the story of this underground cartoonist, Robert Crumb, who in high school was in the category of males for whom a date was not only not likely but unimaginable. So he was at the bottom of the bottom rung, and almost all the reactions he got from females weren’t just no, it was like, “Are you out of your mind?” With that contempt. And then he became successful. And so the documentary is super interesting because it tracks the utter pathology of his sexual fantasies, because he was bitter and resentful. And if you want to understand the psychology of serial sexual killers and the like, and you watch Crumb, you’ll find out a lot more about that than anybody with any sense would want to know.

(01:10:01)
But then he makes this transition, and partly because he does take the heroic adventure path, and he actually has a family and children, and he is actually a pretty functional person as opposed to his brothers, one of whom commits suicide, and one of whom is literally a repeat sexual offender. It’s a brutal documentary. But what he did in his adolescence after being rejected was he found what he was interested in. He was a very good artist. He was very interested in music, and he started to pursue those single-mindedly, and he became successful. And as soon as he became successful, and the documentary tracks this beautifully, he’s immediately attractive to women. And then you might ask too, even if you’re cynical, it’s like, “Well, why do I have to perform for women?” And the answer to that is something like, why the hell should they have anything to do with you if you’re useless? They’re going to have infants. They don’t need another one.

(01:10:53)
Partly the reason that women are hypergamous, that they want males who are of higher status than they are, is because they’re trying to redress the reproductive burden. And it’s substantial. The female of any species is the sex that devotes more to the reproductive function. That’s a more fundamental definition than chromosomal differentiation. And that’s taken to its ultimate extreme with humans. And so of course women are going to want someone around who’s useful, because the cost of sex for them is an 18-year period of dependency with an infant. So I think the adventure comes first.
Lex Fridman
(01:11:32)
Heroic adventure comes first.
Jordan Peterson
(01:11:34)
Well, it’s complex. Because the other problem, let’s say with the Crumb boys, is that their mother was extremely pathological and they didn’t get a lot of genuine feminine nurturance and affection.
Lex Fridman
(01:11:43)
Of course. The family and society are not going to help you most of the time with a heroic adventure, right? They’re going to be a barrier versus a catalyst.
Jordan Peterson
(01:11:53)
Well, in good families they’re both, because they put up constraints on your behavior. I’ve interviewed a lot of successful people about their calling, let’s say, because I do that with all my podcast guests. How did the path that you took to success make itself manifest? And the pattern’s very typical. Almost all the people that I’ve interviewed had a mother and a father. Now, it’s not invariant, but I’d say it’s there 99% of the time. It’s really high. And both of the parents, or at least one of them, but often both, were very encouraging of the person’s interests and pathway to development.
Lex Fridman
(01:12:34)
That’s fascinating. I’ve heard you analyze it that way before, and I had a reaction to that idea, because you focused on the positive of the parents. I feel like it was the … Maybe I see biographies differently, but it feels like the struggle within the family was the catalyst for greatness in a lot of biographies. Maybe I’m misinterpreting it, but I just-
Jordan Peterson
(01:12:57)
No, no. I think that that’s a reflection, maybe … Correct me if I’m wrong. I think that’s a reflection of that dynamic between positive and negative emotion. Like my son, for example, who’s doing just fine, he’s firing on all cylinders as far as I’m concerned. He has a nice family, he gets along with his wife, he’s a really good musician, he’s got a company he’s running well. He’s a delight to be around. He was a relatively disagreeable infant. He was tough-minded, and he didn’t take no for an answer. And so there was some tussle in regulating his behavior. He spent a lot of time when he was two sitting on the steps trying to get his act together. And so that was the constraint. But it wasn’t in opposition to him, exactly; it was in opposition to the immediate manifestation of his hedonistic desires, and it was also an impetus to further development.

(01:13:56)
The rule for me when he was on the stairs was as soon as you’re willing to be a civilized human being, you can get off the stairs. And you might think, well, that’s nothing but arbitrary superego, patriarchal oppressive constraint. Or you could say, “Well, no, what I’m actually doing is facilitating his cortical maturation.” Because when a child misbehaves, it’s usually because they’re under the domination of some primordial emotional or motivational impulse. They’re angry, they’re over-enthusiastic, they’re upset, they’re selfish. It’s narrow self-centeredness expressed in an immature manner.
Lex Fridman
(01:14:34)
But see … Okay. Tell me if I’m wrong, but it feels like the engine of greatness, at least on the male side of things, has often been trying to prove the father wrong, or trying to gain the acceptance of the father. So that tension, where the parent is not encouraging like you mentioned, but is basically saying, “No, you won’t be able to do this.”
Jordan Peterson
(01:15:00)
Okay. So my observation as a psychologist has been that it’s very, very difficult for someone to get their act together unless they have at least one figure in their life that’s encouraging and shows them the pathway forward. So you can have a lot of adversity in your life, and if you have one person around who’s a good model and you’re neurologically intact, you can latch onto that model.

(01:15:22)
Now, you can also find that model in books, and people do that sometimes. I’ve interviewed people who had pretty fragmented childhoods, who turned to books and found the pattern that guided them in, let’s say, the adventures of the heroes of the past, because that’s a good way of thinking about it. And I read a book called Angela’s Ashes that was written by an Irish author, Frank McCourt. Fantastic book, beautiful book. And his father was an alcoholic of gargantuan proportions. An Irish drinker who drank every cent that came into the family and many of whose children died in poverty.

(01:16:01)
And what Frank did, and it’s a testament to the human spirit, is he sort of divided his father conceptually into two elements. There was sober morning father, who was encouraging and with whom he had a relationship, and then there was drunk and useless late afternoon and evening father, and he rejected the negative and he amplified his relationship with the positive. Now, he had other things going for him, but he did a very good job of discriminating.

(01:16:35)
And partly the question that you’re raising is to what degree is it useful to have a beneficial adversary? Yeah, struggle-free progress is not possible. And I think there are situations where you might be motivated to prove someone in your immediate circle wrong, but then that also implies that at some level, for some reason, you actually care about their judgment. You just didn’t write them off completely.
Lex Fridman
(01:17:08)
Well, that’s why I say there’s an archetype of a young man trying to gain the approval of his father. And I think that repeats itself in a bunch of biographies that I’ve read. I don’t know. There must have been an engine somewhere that they found of approval, of encouragement. Maybe in books, maybe in the mother, or maybe the role of the parents is flipped.
Jordan Peterson
(01:17:34)
Well, my father was hard to please. Very.
Lex Fridman
(01:17:38)
Did you ever succeed?
Jordan Peterson
(01:17:39)
Yes, but it wasn’t easy, ever.
Lex Fridman
(01:17:41)
When was the moment when you succeeded?
Jordan Peterson
(01:17:47)
Pretty late. Like 40, maybe later.
Lex Fridman
(01:17:51)
Was it gradual, or a moment when a shift happened?
Jordan Peterson
(01:17:59)
My father was always willing to approve of the things I did that were good, although he was not effusive by any stretch of the imagination, and the standards were very high. Now, that was probably fortunate for me. And it does bear on the question you’re asking. If you want someone to motivate you optimally … God, it’s complicated, because there has to be a temperamental dance between the two people. What you really want is for someone to apply the highest possible standards to you that you’re capable of reaching. And that’s a vicious dance, because you have to have a relationship with your child to do that properly. Because if you want to be optimally motivating as a father, you keep your children on the edge. It’s like, you might not reward something in your child that you would think would be good in someone else, because you think they could do better. And so my father was pretty clear about the idea that he always expected me to do better, and was that troublesome? I felt often when I was young that there was no pleasing him, but I also knew that that wasn’t right. See, I actually knew that wasn’t right. Because I could remember, especially I think when I was very young, that I did things that he was pleased about. I knew that was possible. So it wasn’t unpredictable and arbitrary. It was just difficult.
Lex Fridman
(01:19:36)
It sounds like he’s hit a pretty good optimal. But for each individual human that optimal differs, and that’s what’s hard.
Jordan Peterson
(01:19:44)
Well, that’s why you have to have a relationship with your children. You have to know them. Well, with yourself too, and with your wife. You can’t hit that optimal … That optimal is probably love, because love isn’t just acceptance. Love is acceptance and encouragement. And it’s not just that either. It’s also, “No, don’t do that. That’s beneath you. You’re capable of more.” And how harsh should that be? That’s a really hard question. If you really love someone, you’re not going to put up with their stupidity. “Don’t do that.” One of the rules I had with my little kids was don’t do anything that makes you look like an idiot in public. Why? Because I don’t want you disgracing yourself. Why not? Because I like you. I think you’re great, and you’re not going to act like a bloody fool in public so that people get the wrong idea about you. No.
Lex Fridman
(01:20:40)
What about inside a relationship? A successful relationship. How much challenge, how much peace? Is a successful relationship one that is easy or one that is challenging?
Jordan Peterson
(01:20:57)
I would say to some degree that depends on your temperament. My wife is quite a provocative person, and there are times when I, I suppose … Do I wish that … There are times when I casually wish that she was easier to get along with, but as soon as I think about it, I don’t think that. Because I’ve always liked her. We were friends ever since we were little kids, and she plays rough, and I like that, as it turns out. Now, that doesn’t mean it isn’t a pain from time to time. And that is going to be a temperamental issue to some degree, and an issue of negotiation. She plays rough, but fair. And the fair part, establishing that, has been part of our ongoing negotiation.
Lex Fridman
(01:21:44)
And part of it is in the play, you get to find out about yourself or what your temperament is. I don’t think that’s clear until it’s tested.
Jordan Peterson
(01:21:52)
Oh, definitely not. Definitely not. You find out all sorts of things about yourself in a relationship, that’s for sure. Well, and partly the reason that there is provocativeness, especially from women in relationship to men, is they want to test them out. It’s like … Can you hold your temper when someone’s bothering you? Well, why would a woman want to know that? Well, maybe she doesn’t want you to snap and hurt her kids. And so how’s she going to find that out? Ask you? Well, you’re going to say, “Well, I’d never do that.” It’s like, “Never eh? Let’s find out if it’s never.” So we don’t know how people test each other out in relationships, or why exactly, but it’s intense and necessary.
Lex Fridman
(01:22:34)
What’s yours, and in general, what should a man’s relationship with his temper be?
Jordan Peterson
(01:22:39)
You should have one and you should be able to regulate it. That’s part of that attractiveness of the monstrous that characterizes women’s fantasies. And Nietzsche pointed this out too-
Lex Fridman
(01:22:51)
Pirates.
Jordan Peterson
(01:22:51)
To go back to Nietzsche.
Lex Fridman
(01:22:51)
Yeah.
Jordan Peterson
(01:22:53)
One of Nietzsche’s claims was that most of what passes for morality is nothing but cowardice. I’d never cheat on my wife. Is there anybody asking you to that you actually find attractive, or are there dozens of people asking you to that you find attractive? It’s like, “Well, I would never cheat.” It’s like, “No, you just don’t have the opportunity.” Now, I’m not saying that everyone’s in that position that they would cheat even if they had the opportunity, because that’s not true. And it’s the same with regards to, “Oh, I’m a peaceful man.” It’s like, “No, you’re not. You’re just a weak coward. You wouldn’t dare to have a confrontation, physical or metaphysical, and you’re passing it off as morality because you don’t want to come to terms with the fact of your own weakness and cowardice.”

(01:23:38)
And part of what I would say is the twisted pseudo-Christian morality that Nietzsche was criticizing was exactly of that sort, and it tied into resentment and envy. And he tied that in explicitly: he said that failure in life, masked by the morality that’s nothing but weak cowardice, turns to the resentment that undermines and destroys everything, and does that purposefully.
Lex Fridman
(01:24:05)
Yeah, I think he was criticizing how, under the facade of niceness, there’s an ocean of resentment.
Jordan Peterson
(01:24:10)
Yeah, that’s for sure. For sure. That’s also the danger of being too forthcoming with people. See, this is another thing, let’s say, about my wife, who’s not particularly agreeable. She’s not particularly agreeable, but she’s not resentful, and that’s because she doesn’t give things away that she isn’t willing to. And if you’re agreeable and nice and you’re conflict avoidant, you’ll push yourself too far to please the other person, and then that makes you bitter and resentful. So that’s not helpful.
Lex Fridman
(01:24:41)
Do you think you’ll be in trouble for saying this on a podcast later?
Jordan Peterson
(01:24:45)
No, no. We know each other pretty well. And like I said, it’s a trait that I find admirable. It’s provocative and challenging.
Lex Fridman
(01:24:55)
And it seems to work.
Jordan Peterson
(01:24:57)
Well, we’ve been together 50 years, so …

Good and evil

Lex Fridman
(01:25:00)
Quick pause, bathroom break.

(01:25:02)
If we can descend from the realm of ideas down to history and reality. I would say the time between World War I and World War II was one of history’s biggest tests of ideas, and really of the most dramatic kinds of ideas, the ones that helped us understand the nature of good and evil. I just want to ask you a question about good and evil. Churchill, in many ways, was not a good man. Stalin, as you’ve documented extensively, was a horrible man. But you can make the case that both were necessary for stopping an even worse human being in Hitler. So to what degree do you need monsters to fight monsters? Do you need bad men to be able to fight off greater evils?
Jordan Peterson
(01:26:12)
It’s everything in its proper place is the answer to that. We might think that our life would be easier without fear, let’s say. We might say that our life would be easier without anger or pain, but the truth of the matter is that those things are beneficial, even though they can cause great suffering, but they have to be in their proper place. And that capacity that could in one context be a terrible force for evil can in the proper context be the most potent force for good. A good man has to be formidable. And partly what that means, as far as I can tell, is that you have to be able to say no. And no means … I thought a lot about no working as a clinician, because I did a lot of strategic counseling with my clients in a lot of extremely difficult situations, and I learned to take apart what no meant-
Jordan Peterson
(01:27:03)
And also when dealing with my own children, because I used no sparingly because it’s a powerful weapon, let’s say, but I meant it. And with my kids, what it meant was: if you continue that pattern of behavior, something you do not like will happen to you with 100% certainty. And when that’s the case and you’re willing to implement it, you don’t have to do it very often. With regards to monstrosity, it’s like weak men aren’t good. They’re just weak. That’s Nietzsche’s observation. That’s partly, again, why he was tempted to place the will to power, let’s say, and to deal with that notion in a manner that, when it was tied to the revaluation of all values, was counterproductive. Counterproductive in the final analysis. It’s not like there wasn’t something to what he was driving at. Formidable men are admirable and, you know, don’t mess with them. Douglas Murray is a good example of that.

(01:28:05)
He’s a rather slight guy, but he’s got a spine of steel, and there’s more than a bit of what’s monstrous in him. And Jocko Willink is like that, and Joe Rogan is like that, and you’re like that.
Lex Fridman
(01:28:17)
But there’s a different level. I mean, to me, Churchill might represent the thing you’re talking about, but in World War II, Hitler would not have been stopped without Stalin.
Jordan Peterson
(01:28:31)
Well, I wonder. Yes, yes.
Lex Fridman
(01:28:34)
And if I may insert into this picture of complexity, Hitler would not have stopped until he enslaved and exterminated the entirety of the Jewish people, the Slavic people, the gypsies, everybody who was not Aryan. But then Stalin, in the mass rape of German women by the Red Army as they marched towards Berlin, is a kind of manifestation of the full monstrosity that a person can be.
Jordan Peterson
(01:29:02)
You can easily be in a situation, you can easily, unfortunately find yourself in a situation where all you have in front of you are a variety of bad options. That’s partly why, if you have any sense, you try to conduct yourself very carefully in life because you don’t want to be in a position where you’ve made so many mistakes that all the options left to you are terrible. So you said, well, was it necessary to ally with Stalin? Well, it’s very difficult to second guess the trajectory of something as complex as World War II, but we could say casually, at least as Westerners have in general, that that alliance was necessary. Now, I think the mistake that the West made in the aftermath of World War II was in not dealing as forthrightly with the catastrophes of communism as an ideology as we did with fascism. And that’s especially true of the intellectuals in the universities.

(01:29:59)
I mean, it was very common when I was teaching, both at Harvard and at the University of Toronto, that the students in my personality class, where we studied Solzhenitsyn, who’s actually an existential psychologist in many ways and a deep one, knew nothing about the Soviet atrocities. None of them knew anything about what happened in Ukraine and the death of 6 million productive people, and had no idea that the communists killed tens of millions of people in the aftermath of the Russian Revolution.
Lex Fridman
(01:30:30)
They know even less about Mao and the Great Leap Forward.
Jordan Peterson
(01:30:33)
Right. Some estimates for that are a hundred million people. Now, when your error bars are in the tens of millions, well, that’s a real indication of a cataclysm. And nobody knows how many people died from direct or indirect oppression in the Soviet Union. 20 million seems like a reasonable estimate. Solzhenitsyn’s upper estimate was higher than that.
Lex Fridman
(01:30:54)
And how do you measure the intellectual output that was suppressed and killed off, the number of intellectuals, artists, and writers that were put into the gulags?
Jordan Peterson
(01:31:06)
Well, farmers for that matter, and anyone who was willing to tell the truth, right? Absolutely. So, yeah, catastrophic. And so I think the West’s failure wasn’t so much allying with Stalin. I mean, it was Douglas MacArthur who wanted to continue. He thought we should just take the Soviets out after the Second World War, and he was removed from any position of authority where such a thing might be made possible, and people were tired. But was MacArthur wrong? Well, he certainly wasn’t wrong in his insistence that Stalin was as big a monster as Hitler, or bigger. So the valorization of the radical leftist proclivity is the sin of the West, I think, more intensely than allying with Stalin.
Lex Fridman
(01:31:59)
Tricky, nuanced topic. But if we look at the modern day and the threat of communism and Marxism in the United States, to me it’s disrespectful to the atrocities of the 20th century to call somebody like Kamala Harris a communist. But I see the sort of escalation of the extremeness of the language being used: when you call somebody like Donald Trump a fascist, it makes total sense to then use similarly extreme terminology for somebody like Kamala Harris. But maybe I could ask your evaluation, if you look at the political landscape today, of somebody like Joe Biden and Kamala Harris.
Jordan Peterson
(01:32:40)
Okay. Well, the first thing I would say is that I think that viewing the political landscape of today as a political landscape is actually wrong. I think it’s not the right frame of reference, because what I see happening are a very small percentage of dark tetrad personality types. So Machiavellian, manipulative; narcissistic, wanting undeserved attention; psychopathic, which makes them predatory parasites; and sadistic, because that goes along with the other three. In its serious manifestation, that’s probably three to 5% of the population, and they’re generally kept under pretty decent control by civilized people and stable social interactions. I think that their imaginations are disinhibited by cost-free social media communication. So they gain disproportionate influence. Now, these people want undeserved recognition and social status and everything that goes along with it, and they don’t care how they get it, because when I say they want that, I mean that’s all they want.
Lex Fridman
(01:33:56)
So in the realm of social media, you mentioned, yes, but are you also suggesting that they’re overrepresented in the realm of politics, politicians and so on?
Jordan Peterson
(01:34:06)
They’re overrepresented in the realm of fractious political discourse because they can use ideas. First of all, they can use, let’s say, the benevolent ideas of the right and the benevolent ideas of the left, either one, and switch back and forth for that matter as a camouflage for what they’re actually up to.
Lex Fridman
(01:34:26)
You’ve interviewed a lot of people and you have a really powerful mind. You have a good read on people. So how do you know when you’re sitting across from a psychopath?
Jordan Peterson
(01:34:34)
I wouldn’t say that I do know. In normal social circumstances, we have evolved mechanisms to keep people like that under control. Let’s say that you and I have a series of interactions and you screw me over once. I’m not going to forget that. Now, I might not write you off because of the one time, but if it happens three times, it’s like we’re not going to play together anymore. And in normal times, most of our social networks are connected and interacting. So if you ripped me off three times and I noted that, I’m going to tell everybody I know and they’re going to tell everybody they know, and soon everyone will know, and that’s the end of your tricks. But that assumes that we know who you are and we’re in continual communication. Well, all of that’s gone online. So anonymity does that and so does the amplification of emotional intensity by the social media platforms and their algorithms.

(01:35:35)
I think what we’re doing, this is happening on Twitter continually, is we’re giving the 5% of psychopaths a radically disproportionate voice. And what they’re doing is there’s a bunch of them on the left, and they’re all, we’re so compassionate, and there’s a bunch of them on the right, and at the moment they’re all, we’re so Christian and free speech oriented. It’s like, no, you’re not. You’re narcissistic psychopaths, and that’s your camouflage. And you hide behind your anonymity and you use fractious and divisive language to attract fools and to elevate your social status and your clout. And not only that, to gain, what would you say, satisfaction for your sadistic impulses.
Lex Fridman
(01:36:19)
See, the problem is it’s hard to tell who is the psychopath and who is a heterodox truth seeker.
Jordan Peterson
(01:36:30)
Yeah. Well, if you were charitable about Tucker Carlson’s recent interview, you’d say that was exactly the conundrum he faced. And it is hard. I’ve thought about, for example, interviewing Andrew Tate, and I thought, I don’t think so. And then I thought, why? I figured it’s not obvious to me at all that he wouldn’t charm me. So I knew this guy, Robert Hare. Robert Hare was the world’s foremost authority on psychopathy. He established the field of clinical analysis of psychopathic behavior, and Hare was a pretty agreeable guy. So he would give people the benefit of the doubt, and he interviewed hundreds of serious psychopaths, like imprisoned violent offenders. And he told me in one of our conversations that every time he sat down with a violent offender psychopath, and he had a measure for psychopathy that was a clinical checklist, so he could identify the psychopaths from the, say, run-of-the-mill criminals. Every time he sat down with them, they pulled the wool over his eyes, and he videotaped the interviews. And it wasn’t until later, when he was reviewing the videos, that he could see what they were doing, but in person, their tricks were more sophisticated than his detection ability.

Psychopathy

Lex Fridman
(01:37:47)
Well, okay, this is fascinating because, again, you’re a great interviewer. I would love it if you interviewed somebody like Putin. So this idea that you are a fool in the face of psychopathy just doesn’t jibe with me.
Jordan Peterson
(01:38:00)
I’m an agreeable guy. That’s the problem. I’ll give people the benefit of the doubt.
Lex Fridman
(01:38:03)
Right. But that’s good, because the way you reveal psychopathy is by being agreeable, not weak, but seeking with empathy to understand the other person. And in the details, in the little nuanced ways that they struggle with questions, the psychopathy is revealed. Just to separate the two things: one is the over-representation of psychopathy online with anonymity. That’s a serious, fascinating problem. But in the one-on-one interview, I don’t know if the job of a human being in conversation is to not talk to psychopaths, but to talk… How would you interview Hitler?
Jordan Peterson
(01:38:49)
Well, I’ve had very difficult clinical interviews with people in my clinical practice.
Lex Fridman
(01:38:56)
How do you approach that?
Jordan Peterson
(01:38:57)
Well, I really probably approach that the way I approach most conversations. And it’s something like, I’m going to assume that you’re playing a straight game, but I’m going to watch, and if you throw the odd crooked maneuver in, then I’ll note it. And after you do it three times, I’ll think, okay, I see. I thought we were playing one game, but we’re actually playing another one. And if I’m smart enough to pick that up, that usually works out quite successfully for me. But I’m not always smart enough to pick that up.
Lex Fridman
(01:39:30)
But see, here’s the nice thing. The one-on-one conversation that’s not recorded is different from one that’s listened to by a lot of people, because I would venture to… I trust the intelligence of the viewer and the listener to detect even better than you.
Jordan Peterson
(01:39:44)
Yes. And I think that’s true, by the way.
Lex Fridman
(01:39:46)
To detect this psychopathy.
Jordan Peterson
(01:39:47)
Yeah. I’ve had the odd interview with people that I wasn’t happy with having organized, because I felt that I had brought their ideas to a wider audience than might’ve been appropriate. But my conclusion, and the conclusion of my producers and the people I talked to, was that we could run the interview, the discussion, and let the audience sort it out. And I would say they do. I think as a general rule of thumb, that’s true. And I also think that the long form interviews are particularly good at that, because it’s not that easy to maintain a manipulative stance, especially if you’re empty, for two and a half hours. So you get tired, you get irritable, you lose the track, you’re going to start leaking out your mistakes.
Lex Fridman
(01:40:38)
And that actually is the case for all the world leaders. I would say one hour is too short. Something happens at the two-hour-plus mark where you start to leak. And I trust in the intelligence of the listener to detect that.
Jordan Peterson
(01:40:56)
Yeah. And it might be the intelligence of the distributed crowd. And I mean, that’s what I’ve seen with the YouTube interviews is that it’s hard to fool people as such over a protracted period of time. And I guess it’s partly because everybody brings a slightly different set of falsehood detectors to the table. And if you aggregate that, it’s pretty damn accurate.
Lex Fridman
(01:41:21)
But of course, it’s complicated because ideas of Nazi ideology spread in the twenties. There was a real battle between Marxism and Nazism.
Jordan Peterson
(01:41:30)
Oh, yeah.
Lex Fridman
(01:41:31)
And I believe there were some attempts at censorship of Nazi ideology. Censorship very often does the opposite. It gives the fringe ideologies power if they’re being censored, because that’s an indication that the man in power doesn’t want the truth to be heard, this kind of idea. And that just adds fuel to the fire.
Jordan Peterson
(01:41:56)
It also motivates the paranoid types, because one of the reasons that paranoia spirals out of control is that paranoid people almost inevitably end up being persecuted, because they’re so touchy and so suspicious that people start to walk on eggshells around them, as if there are things going on behind the scenes. And so then they get more distrustful and more paranoid, and eventually they start misbehaving so badly that they are actually persecuted, often by legal authorities, and it’s down the rabbit hole they go. And so Musk is betting on that to some degree. Right? He believes that free expression on Twitter/X will sort itself out and be of net benefit. And I follow a lot of really bad accounts on X because I like to keep an eye on the pathology of the left, let’s say, and the pathology of the right, thinking, at least in my clinical way, that I’m watching the psychopaths dance around and try to do their subversion.

(01:42:57)
And it’s an ugly place to inhabit, that’s for sure. But it’s also the case that a very tiny minority of seriously bad actors can have a disproportionate influence. And one of the things I’ve always hoped for from social media channels is that they separate the anonymous accounts from the verified accounts. They should just be in different categories: people who will say what they think and take the hits to their reputation, and the anonymous types. If you want to see what the anonymous types say, you can see it. But don’t be confusing them with actual people, because they’re not the same. We know that people behave more badly when they’re anonymous. That’s a very well-established psychological finding. Well, and I think the danger to our culture is substantive. I think that perhaps the reason everything started to go sideways pretty seriously around 2015 is that we invented these new modes of communication and we have no idea how to police them. And so the psychopathic manipulators have free rein. About 30% of the internet is pornography.

(01:44:02)
A huge amount of internet traffic is outright criminal. And there’s a penumbra around that of psychopathic, narcissistic, troublemaking trolls. And that might constitute the bulk of the interactions online. And it’s partly because people can’t be held responsible, so the free riders have free rein.
Lex Fridman
(01:44:19)
It’s a fascinating technical challenge of how to make our society resilient to the psychopaths on the left and the right.
Jordan Peterson
(01:44:28)
It might be the fundamental problem of the age, given the amplification of communication by our social networks.
Lex Fridman
(01:44:36)
And so to generalize across psychopaths, you could also think about bots, which behave similarly to psychopaths in their certainty and not caring. They’re maximizing some function. They’re not caring about anything else. Attention. Yeah.
Jordan Peterson
(01:44:49)
Yeah. Short-term attention, even worse. Yeah, because that’s another problem. If the algorithms are maximizing for the grip of short-term attention, they’re acting like immature agents of attention. Right? And so then imagine the worst-case scenario: negative emotion garners more attention and short-term gratification garners more attention. So then you’re maximizing for the grip of short-term attention by negative emotion. I mean, that’s not going to be a sustainable principle. We were talking earlier about unsustainable unifying axioms; that’s definitely one of them. Maximize for the spread of negative emotion that garners short-term attention. Jesus, brutal.
Lex Fridman
(01:45:38)
I tend to not think there are that many psychopaths. So maybe to push back a little bit, it feels like there’s a small number of psychopaths.
Jordan Peterson
(01:45:50)
Three to 5% is the estimate worldwide.
Lex Fridman
(01:45:54)
In terms of humans, sure. But in terms of the pattern of stuff we see online, my hope is that a lot of people on the extreme left and extreme right, or just the trolls in general, are just young people kind of going through the similar stuff that we’ve been talking about, trying on the cynicism and the resentment. There’s a drug aspect to it, there’s a pull to that, to shit-talk somebody, to take somebody down. I mean, there is some pleasure in that. There’s a dark pull towards that. And I think-
Jordan Peterson
(01:46:30)
That’s the sadistic pull.
Lex Fridman
(01:46:31)
And I think a lot of people, I mean, you see, when you say sadistic, it makes it sound like it’s some kind of pathology.
Jordan Peterson
(01:46:37)
It’s pleasure in the suffering of others.
Lex Fridman
(01:46:39)
Right. But I just think that all of us have the capacity for that. All humans have the capacity for that.
Jordan Peterson
(01:46:47)
Some more than others, but everyone to some degree.
Lex Fridman
(01:46:49)
And when you’re young, you don’t understand the full implications of that on your own self. So if you participate in taking other people down, that’s going to have a cost on your own development as a human being. It’s going to take you towards Dostoevsky’s Notes from Underground: in the basement, cynical, all that kind of stuff.
Jordan Peterson
(01:47:07)
Alone.
Lex Fridman
(01:47:08)
Which is why a lot of young people try it out. The reason is, as you get older and older, you realize that there’s a huge cost to that, so you don’t do it. But there’s young people that… So I would like to sort of believe and hope that a large number of people who are trolls are just trying out the derision.
Jordan Peterson
(01:47:24)
No doubt.
Lex Fridman
(01:47:25)
So they can be saved, they could be helped. They could be shown that there’s more growth, more flourishing, in celebrating other people and actually criticizing ideas, but not in the way of derision, LOL, but by formulating your own self in the world, by formulating your ideas in a strong, powerful way, and also removing the cloak of anonymity and just standing behind your ideas and carrying the responsibility of those ideas. Yeah.
Jordan Peterson
(01:47:56)
I think all of that is right. I think the idea that that’s more likely to occur among young people, that’s clear. People, as they mature, get more agreeable and conscientious. So we actually know that what you said is true, technically. It’s definitely the case that there’s an innate tilt towards pleasure in that sort of behavior. And it is associated to some degree with dominance striving. And I do think it’s true, as you pointed out, that many of the people who are toying with that pattern can be socialized out of it. In fact, maybe most people; even the repeat criminal types tend to desist in their late twenties. So 1% of the criminals commit 65% of the crimes. Imagine that that 1% are the people that you’re really concerned with. They often have stable patterns of offending that emerged very, very young, even in infancy, and continued through adolescence and into adulthood.

(01:48:56)
If you keep them in prison until they’re in their mid-to-late twenties, most of them stop. And the easiest way to understand that might just be delayed maturation. So are most people salvageable? Yes, definitely. Is everyone salvageable? Well, at some point it becomes… first of all, they have to want to be salvaged. That’s a problem. But then it also becomes something like, well, how many resources are you going to devote to that? The farther down the rabbit hole you’ve gone, the more energy it takes to haul you up. So there comes a point where the probability that you’ll be able to get enough resources devoted to you to rescue you from the pit of hell that you’ve dug is zero. And that’s a very sad thing. And it’s very hard to be around someone who’s in that situation, very, very hard.
Lex Fridman
(01:49:50)
And it seems that it’s more likely that the leaders of movements are going to be psychopaths, and the followers of movements are going to be the people we’re mentioning, the ones who have kind of lost themselves to the ideology of the movement.
Jordan Peterson
(01:50:05)
Well, we know that what you said is true even historically, to a large degree, because Germany was successfully de-Nazified. And it’s not like everybody who participated in every element of the Nazi movement was brought to justice, not in the least. The same thing happened in Japan. So, to some degree, the same thing happened in South Africa. Right? And it’s the case, for example, also in the stories that we were referring to earlier, the biblical stories, that the patriarchs of the Bible, most of them, are pretty bad people when they first start out. Jacob is the one who becomes Israel. He’s a major player in the biblical narrative, and he’s a pretty bad actor when he first starts out. He’s a mama’s boy. He’s a liar. He steals from his own brother, and in a major way, he deceives his father. He’s a coward, and yet he turns his life around.
Lex Fridman
(01:51:05)
So be careful which leaders you idolize and worship, but then it’s not always easy to know who is the good and who is the evil.
Jordan Peterson
(01:51:14)
Yeah.

Hardship

Lex Fridman
(01:51:15)
It’s hard. You have been through some dark places in your mind over your life. What have been some of your darker hours, and how did you find the light?
Jordan Peterson
(01:51:27)
Well, I would say I started contending with the problem of evil very young, 13 or 14. And that was my main motivation for study for 30 years, I guess, something like that. At the end of that 30 years, I became more and more interested in fleshing out the alternative. Once I became convinced that evil existed, and that was very young, I always believed that if you could understand something well enough, you could formulate a solution to it. But it turns out that seeing evil and understanding that it exists is less complicated than a technical description of its opposite, of what is good. You can say, well, it’s not that, for sure. It’s not Auschwitz. How about we start there? It’s as far from Auschwitz as you can get. It’s as far from enjoying being an Auschwitz camp guard as you can get.

(01:52:38)
Okay, well, where are you when you’re as far away from that as you could possibly get? What does that mean? And it does have something to do with play, as far as I’m concerned. I think the antithesis of tyranny is play. It took me a long time to figure that out specifically. So that was very dark. I spent a lot of time studying the worst behaviors that I could discover, abstractly in books, but also in my clinical practice and in my observations of people. And so that’s rough. More recently, I was very ill and in a tremendous amount of pain that lasted pretty much without any break for three years. And what was particularly useful to me then was the strength of my relationships, my immediate relationships, my friendships. Also, the relationships that I had established more broadly with people.

(01:53:45)
Because by the time I became ill, I was reasonably well known, and people were very supportive when I was having trouble, and that was very helpful. But it’s certainly the case that it was the connections I had, particularly with my family, but also with my friends, that were the saving grace. And that’s something to know. I mean, it’s necessary to bear the burdens of the world on your own shoulders, that’s for sure, the burdens of your own existence and whatever other responsibilities you can mount. But that by no means means that you can or should do it alone. And so you might say, well, welcoming the adversity of life as a redemptive challenge is a task that’s beyond the ability of the typical person, or even maybe of anyone. But then you think, well, you’re not alone: maybe you’re not alone socially, you’re not alone familially, maybe you’re not alone metaphysically as well. There’s an insistence.

(01:54:47)
And I think it’s true. There’s an insistence, for example, in the Old and the New Testament alike, that the more darkness you’re willing to voluntarily encounter, the more likely it is that the spirit of Abraham and the patriarchs will walk with you. And I think that’s right. I think it’s sort of technically true, in that the best parts of yourself make themselves manifest. If you want to think about it that way, the best parts of yourself, whatever that means, make themselves manifest when you’re contending actively and voluntarily with the most difficult challenges. Why wouldn’t it be that way? And then you could think, well, that’s yourself. It’s like, well, are the best unrevealed parts of you yourself? Well, no, they’re a kind of metaphysical reality. They’re not yet manifest. They only exist in potential. They transcend anything you’re currently capable of, but they have an existence. You could call that yourself.

(01:55:45)
But it was Jung’s contention, for example, with regards to such terminology, that the reason we use the term self instead of God is because when God was dispensed with, let’s say, by the processes Nietzsche described, we just found the same thing…
Jordan Peterson
(01:56:03)
Deep within the instinctive realm, let’s say, we found it at the bottom of things instead of at the top. It’s like, it doesn’t matter. It doesn’t matter fundamentally. What matters is whether or not that’s a reality. And I think it’s the fundamental reality, because I do think that the deeper you delve into things… This is what happens to Moses when he encounters the burning bush. So Moses is just going about his life. He’s a shepherd, he’s an adult. He has wives, he has children, he has responsibilities. He’s left his home and he’s established himself. And so things are pretty good for Moses. And then he’s out by Mount Horeb in that story, but it’s the central mountain of the world. It’s the same mountain as Sinai, which is the place where heaven and earth touch. And he sees something that grabs his attention, right?

(01:56:53)
That’s the burning bush. And a bush is a tree. That’s life. That’s the tree of life. And the fact that it’s on fire, that’s life exaggerated, because everything that’s alive is on fire. And so what calls to Moses is the spirit of being itself, and it calls him off the beaten track, and he decides to go investigate. So Moses is everyone who goes off the beaten track to investigate. And so as he investigates, he delves more and more deeply, until he starts to understand that he’s now walking on sacred ground. So he takes off his shoes, and that’s a symbolic reference to identity transformation. He’s no longer walking the same path. He no longer has the same identity. He’s in a state of flux. And that’s when what happens is that he continues to interact with this calling, and Moses asks what it is that’s being revealed, and God says, I’m the spirit of being itself.

(01:57:51)
That’s basically the answer. I am what I am. It’s a more complex utterance than that. I am what I will be. I am what is becoming. It’s all of that at the same time. It’s the spirit of being that’s speaking to him, the spirit of being and becoming. And it tells Moses that now, because he’s delved so deeply into something so compelling, his identity has transformed and he’s become the leader who can speak truth to power. And so he allies himself with his brother Aaron, who’s the political arm and who can communicate, and he goes back to Egypt to confront the tyrant. And that’s an indication of that idea that if you wrestle with life properly, the spirit of being and becoming walks with you. And it’s like, how can that not be true? Because the contrary would be that there would be no growth in challenge. Well, you have to be infinitely nihilistic to believe that.
Lex Fridman
(01:58:50)
It’s obvious, but it’s also just fascinating that hardship is the thing that ends up being the catalyst for delving deeply.
Jordan Peterson
(01:59:02)
It’s hardship voluntarily undertaken. And it’s crucially true. Look, if you bring someone into therapy, let’s say they’re afraid of elevators, and you trick them into getting near an elevator, you’ll make them worse. But if you negotiate with them so that they voluntarily move towards the elevator on their own recognizance, they’ll overcome their fear and become generally braver. But it has to be voluntary.
Lex Fridman
(01:59:31)
See, I got to push back and explore with you the question of voluntarily. Let’s look at Nietzsche. He suffered through several health issues throughout his life, migraines, eyesight issues, digestive problems, depression with suicidal thoughts, and yet he is one of the greatest minds in the history of humanity. So were these problems that he was suffering, arguably involuntarily, a feature or a bug?
Jordan Peterson
(01:59:58)
That’s a good question. The same thing happens in the story of Job, because Job is a good man. God himself admits it. And Satan comes along and says to God, “I see you’re pretty proud of your man there, Job.” God says, “Yeah, he’s doing pretty well.” And Satan says, “I think it’s just because things are easy for him. Let me have a crack at him and see what happens.” And God says, “Yeah, I think you’re wrong. Do your worst.” Right? And that’s how people feel when those slings and arrows come at them, let’s say like Nietzsche. Well, Job’s response to that… Now, the story is set up so that what befalls Job is actually quite arbitrary, these catastrophes that you’re describing. The voluntarism in Job is his refusal to despair even in the face of that adversity. And that seems like something like an expression of voluntary free will.

(02:00:47)
He refuses to lose faith. And the way the story ends is that Job gets everything back and more. So that’s a descent and ascent story. And a cynic might say, “Well, the ends don’t justify the means.” And I would say, “Fair enough.” But that’s a pretty shallow interpretation of the story. What it indicates instead is that if you’re fortunate, because let’s not forget that, and you optimize your attitude even in the face of adversity, it’s not infrequently the case that your fortunes will reverse. And I’ve found that in many situations. The journalists whose goal was most malicious in relationship to me, who were most concerned with, what would you say, fostering their own notoriety and gaining social status at my expense, were the ones who did me the greatest favor. Those were the interviews that went viral. And so that’s interesting, because they were definitely the places where the most disaster was at hand. And I felt that in the aftermath. Every time that happened, my whole family was destabilized for two months, because things… It wasn’t obvious at all which way the dice were going to roll.
Lex Fridman
(02:02:13)
But you leaned into that. So in a sense that there’s this kind of a transformation from the involuntary to the voluntary, basically saying, “Bring it on.” That act of bring it on turns the involuntary hardship into voluntary hardship.
Jordan Peterson
(02:02:29)
Well, not necessarily, let’s say, but you could say that’s your best bet. Well, I’m never going to say that you can transcend all catastrophe with the right attitude, because that’s just too much to say. But I could say that in a dire situation, there’s always an element of choice. And if you make the right choices, you improve the degree, you improve your chances of success to the maximal possible degree.
Lex Fridman
(02:03:05)
It might be too much to say, but nevertheless could be true. Viktor Frankl, Marcus Aurelius.
Jordan Peterson
(02:03:14)
Well, that’s what the resurrection story proclaims, is that even under the worst imaginable circumstances, the fundamental finale is the victory of the good. And that seems to me to be true.

Pain and gratitude

Lex Fridman
(02:03:33)
Do you have regrets when you look back at your life in the full analysis of it?
Jordan Peterson
(02:03:40)
Well, as I said, I was very ill for about three years, and it was seriously brutal. This is no lie. Every single minute of that three years was worse than any single time I’d ever experienced in my entire life up to that. So that was rough.
Lex Fridman
(02:03:57)
Was the roughest the physical or the psychological?
Jordan Peterson
(02:04:01)
Pain.
Lex Fridman
(02:04:02)
Just literal pain?
Jordan Peterson
(02:04:05)
Yep. Yeah, I was walking like 10 to 12 miles a day, rain or shine, winter, didn’t matter, not good. And it was worse than that because as the day progressed, my pain levels would fall until by 10, 11 at night when I was starting to get tired. I was approaching, what would you say? I was approaching something like an ordinary bad day, but as soon as I went to sleep, then the clock was reset and all the pain came back. And so it wasn’t just that I was in pain, it was that sleep itself became an enemy. And that’s really rough, man, because sleep is where you take refuge, you’re worn out, you’re tired, and you go to sleep and you wake up and it’s generally, it’s something approximating a new day.

(02:05:10)
This was Sisyphus on steroids. It was very difficult to maintain hope in that, because I would do what I could. There were times when it took me like an hour and a half in the morning to stand up. I’d do all that and more or less put myself back into something remotely resembling human by the end of the day. And then I knew perfectly well, exhausted, if I fell asleep that I was going to be right at the bottom of the bloody hill again. And so after a couple of years of that, it was definitely the fact that I had a family that carried me through that.
Lex Fridman
(02:05:45)
What did you learn about yourself and about the human mind from that, from all of those days?
Jordan Peterson
(02:05:55)
Well, I think I learned more gratitude for the people I had around me. And I learned how fortunate I was to have that and how crucial that was. My wife learned something similar. She was diagnosed with a form of cancer that, as far as we know, killed every single person who ever had it except her. It’s quite rare. And her experience was that what really gave her hope and played at least a role in saving her was the realization of the depth of love that her son, in particular, had for her. And that says nothing about her relationship with Mikhaila, with her daughter. It just so happened that it was the revelation of that love, that it made Tammy understand the value of her life in a way that she wouldn’t have realized of her own accord.

(02:06:51)
We’re very, very… There’s no difference between ourselves and the people that we love. And there might be no difference between ourselves and everyone everywhere, but we can at least realize that, to begin with, in the form of the people that we love. And I hope I’m better at that than I was. I think I’m better at it than I was. I’m a lot more grateful for just ordinariness than I was because when I first started to recover, I remember, I was standing in this pharmacy waiting for a prescription in a little town, and they weren’t being particularly efficient about it.

(02:07:28)
And so I was in that, standing in the aisle for 20 minutes, and I thought, “I’m not on fire. I could just stand here for the rest of my life, just not being in pain and enjoying that.” And that would have been something that before that would have been, I would have been impatient and raring to go because I didn’t have 20 minutes to stand in the middle of an aisle. And I thought, “Well, if you’re just standing there and you’re not on fire, things are a lot better than they might be.” And I certainly, I know that, and I think I remember it almost all the time.
Lex Fridman
(02:08:04)
You gain a greater ability to appreciate the mundane moments of life.
Jordan Peterson
(02:08:09)
Yeah, definitely. The miracle of the mundane, right?
Lex Fridman
(02:08:13)
Yeah.
Jordan Peterson
(02:08:14)
I think Nietzsche had that because he was very ill. And so I suspect he had… And he was regarded by the inhabitants of the village that he lived in, near the end of his life, as something approximating a saint. He apparently conducted himself very admirably despite all his suffering.
Lex Fridman
(02:08:37)
But that still, there’s this tension, as there is in much of Nietzsche’s work, between the miracle of the mundane, appreciating the miracle of the mundane versus fearing the tyranny of the mediocre.
Jordan Peterson
(02:08:53)
It’s more the mediocre and resentful.
Lex Fridman
(02:08:55)
Yes, but that’s you giving him a pass or seeing the good.
Jordan Peterson
(02:08:59)
Well, fair enough.
Lex Fridman
(02:09:01)
There’s a kind of… I mean, the tyranny of the mediocre, I always hated this idea that some people are better than others, and I understand it, but it’s a dangerous idea.
Jordan Peterson
(02:09:12)
This is why I like the story of Cain and Abel, I would say. Because Cain is mediocre, but that’s because he refuses to do his best. It’s not something intrinsic to him. And I actually think that’s the right formulation because I had people in my clinical practice who were, they were lost in many dimensions from the perspective of comparison. One woman I remember in particular who, man, she had a lot to contend with, she was not educated, she was not intelligent. She had a brutal family, terrible history of psychiatric hospitalization. And when I met her at a hospital, she was an outpatient from the psychiatric ward, and she had been in there with people that she thought were worse off than her, and they were. And that was a long way down.

(02:10:14)
That was like Dante’s Inferno level down. It was a long-term psychiatric inpatient ward. Some of the people had been there for 30 years. It made One Flew Over the Cuckoo’s Nest look like a romantic comedy. And she had come back to see if she could take some of those people for a walk, and was trying to find out how to get permission to do it. Better than other people? Some people are more intelligent, some people are more beautiful, some people are more athletic. Maybe it’s possible for everyone at all levels of attainment to strive towards the good. And maybe those talents that are given to people unfairly don’t privilege them in relationship to their moral conduct. And I think that’s true. There’s no evidence, for example, that there’s any correlation whatsoever between intelligence and morality. You’re not better because you’re smart. And what that also implies is if you’re smart, you can be a lot better at being worse.
Lex Fridman
(02:11:22)
I think, for myself, I’m just afraid of dismissing people because of my perception of them.
Jordan Peterson
(02:11:32)
Yeah. Well, that’s why we have that metaphysical presumption that everybody’s made in the image of God. Despite that immense diversity of apparent ability, there’s that underlying metaphysical assumption that, yeah, we all vary in our perceived and actual utility in relationship to any proximal goal, but all of that’s independent of the question of axiomatic worth. And preposterous as that notion appears to be, it seems to me that societies that accept it as a fundamental axiomatic presumption are always the societies that you’d want to live in if you had a choice. And that to me is an existence proof for the utility of the presumption. And also, if you treat people like that in your life, every encounter you have, you make the assumption that it’s a radical equality of worth despite individual variance in ability, something like that, man, your interactions go way better. I mean, everyone wants to be treated that way.

(02:12:38)
Look, here’s a developmental sequence for you, naive and trusting, hurt and cynical. Okay, well, is hurt and cynical better than naive and trusting? It’s like, yeah, probably. Is that where it ends? How about cynical and trusting as step three? And then the trust becomes courage. It’s like, yeah, I’ll put my hand out for you, but it’s not because I’m a fool. And I think that’s right, because that’s the re-instantiation of that initial trust that makes childhood magical and paradisal. But it’s the admixture of that with wisdom. It’s like, yeah, we could walk together uphill, but that doesn’t mean, and I’ll presume that that’s your aim, but that doesn’t mean that I’m not going to watch.
Lex Fridman
(02:13:34)
What’s a better life, cynical and safe or hopeful and vulnerable to be hurt?
Jordan Peterson
(02:13:42)
Oh, you can’t dispense with vulnerable to be hurt. That’s the other realization. It’s like you’re going to stake your life on something. You could stake your life on security, but it’s not going to help. You don’t have that option.
Lex Fridman
(02:13:55)
So what do you do when you’re betrayed ultimately by some people you come across?
Jordan Peterson
(02:14:02)
Grieve and look elsewhere. Do what you can to forgive, and not least, so you lighten your own burden. Maybe do what you can to help the person who betrayed you. And if that all proves impossible, then wash your hands of it and move on to the next adventure.
Lex Fridman
(02:14:27)
And do it again.
Jordan Peterson
(02:14:28)
Yeah. Yeah.

Truth

Lex Fridman
(02:14:30)
Boy, this life, something else. So we’ve been talking about some heavy, difficult topics, and you’ve talked about truth in your Nietzsche lectures and elsewhere. When you think, when you write, when you speak, how do you find what is true? Hemingway said, “All you have to do is write one true sentence.” How do you do that?
Jordan Peterson
(02:14:53)
Well, I would say first that you practice that. It’s like that question is something. And Hemingway knew this at least to some degree, and he certainly wrote about it, is that you have to orient your life upward as completely as you can, because otherwise you can’t distinguish between truth and falsehood. It has to be a practice. Now, for me, I started to become serious about that practice when I realized that it was the immorality of the individual, the resentful, craven, deceitful immorality of the individual that led to the terrible atrocities that humans engage in that make us doubt even our own worth. I became completely convinced of that. That the fundamental root cause of evil, let’s say, wasn’t economic or sociological, that it was spiritual, just psychological, and that if that was the case, you had an existential responsibility to aim upward and to tell the truth, and that everything depends on that. And I became convinced of that. And so then… Look, you set your path with your orientation. That’s how your perceptions work. As soon as you have a goal, a pathway opens up to you and you can see it. And the world divides itself into obstacles and things that move you forward. And so the pathway that’s in front of you depends on your aim. The things you perceive are concretizations of your aim. If your aim is untrue, then you won’t be able to tell the difference between truth and falsehood. And you might say, “Well, how do you know your aim is true?” It’s like, well, you course correct continually, and you can aim towards the ultimate. Are you ever sure that your aim is in the right direction? You become increasingly accurate in your apprehension.
Lex Fridman
(02:16:44)
Is it part of the process to cross the line, to go outside the Overton Window, to dip a toe outside the Overton Window for a bit?
Jordan Peterson
(02:16:52)
Of course. That’s what you do in part in play. I was at the Comedy Mothership, and every single comedian was completely reprehensible. All they were doing was saying things that you can’t say. Well, but it was in play. What I’m trying to do in my lectures is I’m on the edge. I have a question I’m trying to address, and I’m trying to figure it out. I don’t know where the conversation is going. Truly, it’s an exploration, and I think the reason that the audiences respond is because they can feel that, it’s a high wire act, and I could fail. My lectures have degrees of success. Sometimes I get real fortunate and there’s a perfect narrative arc. I have a question, I’m investigating it. It comes to a punchline conclusion just at the right time, and it’s like the whole act is complete, and sometimes it’s more fragmented. But I can tell when the audience is engaged because everyone’s silent, except maybe when they’re laughing.
Lex Fridman
(02:17:47)
There’s a sense that you’re arguing with yourself when you’re lecturing. It’s beautiful. It’s really beautiful and powerful to watch. Nietzsche does the same. There’s contradictions in what you’re saying. There’s a struggle, what you’re saying. But I do think that when you’re doing the same on the internet, you get punished for the deviations. You get punished for the exploration, especially when that explores outside the Overton Window.
Jordan Peterson
(02:18:08)
Look, if you’re going to play hard in a conversation to explore, you’re going to say things that are edgy, that are going to cause trouble, and they might be wrong. And that’s another reason why free speech protection is so important. You actually have to protect the right, let’s say, in the optimal circumstance, you have to protect the right of well-meaning people to be wrong. Now, you probably have to go beyond that to truly protect it, you have to even protect the right of people who aren’t meaning well to be wrong. And we also need that because we’re not always well-meaning. The alternative to that protection would be the insistence that people only say what was 100% right all the time.
Lex Fridman
(02:18:49)
I’m also, I guess this is a call to our fellow humans not to reduce a person to a particular statement, which is what the internet tends to want to do.
Jordan Peterson
(02:19:01)
Especially if it’s the worst thing they ever said.
Lex Fridman
(02:19:03)
Yeah. Yeah.
Jordan Peterson
(02:19:04)
Yeah. Because God… Well, anyone judged by that standard is doomed unless they’re silent.
Lex Fridman
(02:19:08)
But it also just makes you not want to play.
Jordan Peterson
(02:19:11)
Yeah, right?
Lex Fridman
(02:19:11)
Not want to take radical thought experiments and carry them out to their natural conclusion.
Jordan Peterson
(02:19:16)
Well, that’s kind of the definition of a totalitarian state.
Lex Fridman
(02:19:19)
Yes.
Jordan Peterson
(02:19:19)
No one’s playing in a totalitarian state, ever.
Lex Fridman
(02:19:21)
But in this case, it’s an emergent one-
Jordan Peterson
(02:19:24)
Yeah.
Lex Fridman
(02:19:24)
… with psychopaths roaming the landscape, the barbarians.
Jordan Peterson
(02:19:29)
That might be the general pattern of totalitarianism.
Lex Fridman
(02:19:32)
Well, in totalitarianism, there’s usually one psychopath, not multiple.
Jordan Peterson
(02:19:36)
Yeah. Well, everyone else is complicit, at least in their silence.
Lex Fridman
(02:19:40)
Yeah. Does the study of the pathology of psychopaths online wear on you?
Jordan Peterson
(02:19:45)
Yes, definitely.
Lex Fridman
(02:19:47)
Do you ever consider doing less of that?
Jordan Peterson
(02:19:50)
Yes. Yes. Definitely. Probably I experienced most of that on X, but that’s also where I find most of my guests. That’s also where I get a sense of the zeitgeist, which is necessary if you’re going to be a podcast host, for example. It’s necessary for me to get a sampling of the current moment to keep my lectures on point and up to date. You have to be of the moment, in many ways, to function at a high level. There’s a price to be paid for that because you’re exposed to everything in a sense.
Lex Fridman
(02:20:35)
You can also over sample the darkness.
Jordan Peterson
(02:20:39)
Yeah. Yeah, definitely.
Lex Fridman
(02:20:40)
And it can make you more and more cynical. It’s a danger, right?
Jordan Peterson
(02:20:43)
Yeah. Yeah. Well, luckily for me, I have many things that counterbalance that, the familial relationships we talked about, the friendships, and then also all of the public things I do are positive. The lecture tours, for example, which I’m on a lot, they’re basically 100% positive, so I’m very well buttressed against that-
Lex Fridman
(02:21:08)
That’s great to hear.
Jordan Peterson
(02:21:09)
… darker element.
Lex Fridman
(02:21:10)
As a fan in the arena, watching the gladiators fight, your mind is too important to be lost to the cynical, to the battles with the abyss.
Jordan Peterson
(02:21:22)
You have a moral obligation too, to maintain a positive orientation. It’s a moral obligation. The future is, of course, rife with contradictory possibilities, and I suppose in some ways, the more rapid the rate of transformation, the more possibility for good and for evil is making itself manifest at any moment. But it looks like the best way to ensure that the future is everything we wish it would be is to maintain faith that that is the direction that will prevail. And I think that’s a form of moral commitment, when it’s not just naive optimism.
Lex Fridman
(02:22:00)
Well, Jordan, thank you for being courageous and being the light amid the darkness for many, many people. And thank you for once again talking today.
Jordan Peterson
(02:22:10)
Thanks very much for the invitation and for the conversation. It’s always a pleasure to see you. You’re doing a pretty decent job yourself out there, illuminating dark corners and bringing people upward. You’ve got a remarkable thing going with your podcast, and you’re very good at it.
Lex Fridman
(02:22:29)
Thank you, Jordan. Thanks for listening to this conversation with Jordan Peterson. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Friedrich Nietzsche. “I would like to learn more to see as beautiful that which is necessary in things. Then I shall be one of those who make things beautiful.” Thank you for listening, and hope to see you next time.

Transcript for Cursor Team: Future of Programming with AI | Lex Fridman Podcast #447

This is a transcript of Lex Fridman Podcast #447 with Cursor Team.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex
(00:00:00)
The following is a conversation with the founding members of the Cursor team, Michael Truell, Sualeh Asif, Arvid Lunnemark, and Aman Sanger. Cursor is a code editor based on VS Code that adds a lot of powerful features for AI-assisted coding. It has captivated the attention and excitement of the programming and AI communities. So I thought this is an excellent opportunity to dive deep into the role of AI in programming. This is a super technical conversation that is bigger than just about one code editor. It’s about the future of programming and in general, the future of human AI collaboration in designing and engineering complicated and powerful systems. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Michael, Sualeh, Arvid and Aman.

Code editor basics

Lex
(00:00:59)
All right, this is awesome. We have Michael, Aman, Sualeh, Arvid here from the Cursor team. First up, big ridiculous question. What’s the point of a code editor?
Michael
(00:01:10)
So the code editor is largely the place where you build software and today or for a long time, that’s meant the place where you text edit a formal programming language. And for people who aren’t programmers, the way to think of a code editor is a really souped up word processor for programmers, where the reason it’s souped up is code has a lot of structure. And so the “word processor,” the code editor can actually do a lot for you that word processors sort of in the writing space haven’t been able to do for people editing texts there.

(00:01:42)
And so that’s everything from giving you visual differentiation of the actual tokens in the code so you can scan it quickly to letting you navigate around the code base, sort of like you’re navigating around the internet with hyperlinks, you’re going to definitions of things you’re using to error checking to catch rudimentary bugs. And so traditionally that’s what a code editor has meant. And I think that what a code editor is is going to change a lot over the next 10 years as what it means to build software maybe starts to look a bit different.
Lex
(00:02:16)
I think also a code editor should just be fun.
Arvid
(00:02:19)
Yes, that is very important. That is very important. And it’s actually sort of an underrated aspect of how we decide what to build. A lot of the things that we build and then we try them out, we do an experiment and then we actually throw them out because they’re not fun. And so a big part of being fun is being fast a lot of the time. Fast is fun.
Lex
(00:02:42)
Yeah, fast is… That should be a T-shirt.
Michael
(00:02:48)
Fundamentally, I think one of the things that draws a lot of people to building stuff on computers is this insane iteration speed, where in other disciplines you might be sort of gated by resources or the ability… Even the ability to get a large group together and coding is this amazing thing where it’s you and the computer and that alone, you can build really cool stuff really quickly.

GitHub Copilot

Lex
(00:03:09)
So for people who don’t know, Cursor is this super cool new editor that’s a fork of VS Code. It would be interesting to get your explanation of your own journey of editors. I think all of you were big fans of VS Code with Copilot. How did you arrive to VS Code and how did that lead to your journey with Cursor?
Aman
(00:03:33)
Yeah, so I think a lot of us… Well, all of us were originally [inaudible 00:03:39] users.
Sualeh
(00:03:39)
Pure Vim.
Aman
(00:03:40)
Pure Vim. Yeah. No Neovim, just Pure Vim and a terminal. And at least for myself, it was around the time that Copilot came out, so 2021 that I really wanted to try it. So I went into VS Code, the only code editor in which it was available, and even though I really enjoyed using Vim, just the experience of Copilot with VS Code was more than good enough to convince me to switch. And so that kind of was the default until we started working on Cursor.
Lex
(00:04:14)
And maybe we should explain what Copilot does. It’s a really nice autocomplete. As you start writing a thing, it suggests one or two or three lines for how to complete the thing. And there’s a fun experience in that. You know, like when you have a close friendship and your friend completes your sentences? When it’s done well, there’s an intimate feeling. There’s probably a better word than intimate, but there’s a cool feeling of holy shit, it gets me. And then there’s an unpleasant feeling when it doesn’t get you. And so there’s that kind of friction. But I would say for a lot of people, the feeling that it gets me overpowers that it doesn’t.
Arvid
(00:04:55)
And I think actually one of the underrated aspects of GitHub Copilot is that even when it’s wrong, it’s a little bit annoying, but it’s not that bad because you just type another character and then maybe then it gets you, or you type another character and then it gets you. So even when it’s wrong, it’s not that bad.
Sualeh
(00:05:09)
You can sort of iterate and fix it. I mean, the other underrated part of Copilot for me was just the first real AI product. So the first language model consumer product.
Lex
(00:05:21)
So Copilot was kind of like the first killer app for LLMs.
Michael
(00:05:25)
Yeah. And the beta was out in 2021.
Lex
(00:05:29)
Right. Okay. So what’s the origin story of Cursor?
Michael
(00:05:34)
So around 2020, the scaling laws papers came out from OpenAI, and that was a moment where this looked like clear, predictable progress for the field, where even if we didn’t have any more ideas, it looked like you could make these models a lot better if you had more compute and more data.
Lex
(00:05:49)
By the way, we’ll probably talk for three to four hours on the topic of scaling laws. But just to summarize, it’s a paper in a set of papers and a set of ideas that say bigger might be better for model size and data size in the realm of machine learning.
Sualeh
(00:06:05)
It’s bigger and better, but predictably better.
Lex
(00:06:08)
Okay, that’s another topic of conversation.
Arvid
(00:06:10)
Yes. Yeah.
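The scaling-laws idea summarized above, that models get predictably better as model size and data size grow, is usually written as a smooth power law in parameter count and token count. The following is a minimal illustrative sketch only: the functional form echoes the published scaling-law papers, but every constant below is invented for illustration and is not a fitted value from any paper.

```python
# Illustrative power-law scaling sketch: predicted loss falls smoothly
# as parameter count N and training-token count D grow.
# All constants here are made up for illustration, not fitted values.

def predicted_loss(n_params: float, n_tokens: float,
                   irreducible: float = 1.7,
                   a: float = 400.0, alpha: float = 0.34,
                   b: float = 1800.0, beta: float = 0.28) -> float:
    """Loss decreases monotonically in both model size and data size."""
    return irreducible + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e8, 1e10)   # ~100M params, ~10B tokens
large = predicted_loss(1e10, 1e12)  # ~10B params, ~1T tokens
print(small > large)  # True: bigger is predictably better
```

The point of the "predictably" in the exchange above is that the curve is smooth: you can fit it at small scales and extrapolate what more compute and data will buy you before training the larger model.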
Michael
(00:06:11)
So around that time for some of us, there were a lot of conceptual conversations about what’s this going to look like? What’s the story going to be for all these different knowledge worker fields about how they’re going to be made better by this technology getting better? And then I think there were a couple of moments where the theoretical gains predicted in that paper started to feel really concrete and it started to feel like a moment where you could actually go and not do a PhD if you wanted to do useful work in AI. It actually felt like now there was this whole set of systems one could build that were really useful. And I think that the first moment we already talked about a little bit, which was playing with the early beta of Copilot, that was awesome and magical.

(00:06:51)
I think that the next big moment where everything kind of clicked together was actually getting early access to GPT-4. So it was sort of the end of 2022 when we were tinkering with that model, and the step-up in capabilities felt enormous. And previous to that, we had been working on a couple of different projects. Because of Copilot, because of scaling laws, because of our prior interest in the technology, we had been tinkering around with tools for programmers, but things that are very specific. So we were building tools for financial professionals who have to work within a Jupyter Notebook, or playing around with can you do static analysis with these models?

(00:07:29)
And then the step-up in GPT-4 felt like, look, that really made concrete the theoretical gains that we had predicted before. It felt like you could build a lot more just immediately at that point in time. And also, if we were being consistent, it really felt like this wasn’t just going to be a point solution thing; all of programming was going to flow through these models, and it felt like that demanded a different type of programming environment, a different type of programming. And so we set off to build that sort of larger vision around then.
Sualeh
(00:07:59)
There’s one that I distinctly remember. So my roommate is an IMO Gold winner and there’s a competition in the US called the Putnam, which is sort of the IMO for college people, and it’s this math competition. It’s exceptionally good. So Shengtong and Aman, I remember, sort of June of 2022, had this bet on whether by June or July of 2024 a model was going to win a gold medal in the IMO.
Lex
(00:08:31)
IMO is the International Math Olympiad.
Sualeh
(00:08:33)
Yeah, IMO is International Math Olympiad. And so Arvid and I had both also competed in it. So it was sort of personal, and I remember thinking, man, this is not going to happen. Even though I sort of believed in progress, I thought, IMO Gold? Aman is delusional. And to be honest, I mean, I was, to be clear, very wrong. But that was maybe the most prescient bet in the group.
Lex
(00:09:05)
So the new results from DeepMind, it turned out that you were correct.
Arvid
(00:09:11)
Technically not.
Aman
(00:09:12)
Technically incorrect but one point away.
Michael
(00:09:15)
Aman was very enthusiastic about this stuff back then and before. Aman had this scaling laws T-shirt that he would wear around, where it had the charts and the formulas on it.
Lex
(00:09:25)
So you felt the AGI, or you felt the scaling laws.
Aman
(00:09:28)
Yeah, I distinctly remember there was this one conversation I had with Michael. Before that, I hadn’t thought super deeply and critically about scaling laws, and he kind of posed the question, why isn’t scaling all you need, or why isn’t scaling going to result in massive gains in progress? And I think I went through the stages of grief. There is anger, denial, and then finally at the end just thinking about it, acceptance. And I think I’ve been quite hopeful and optimistic about progress since. I think one thing I’ll caveat is I think it also depends on in which domains you’re going to see progress. Math is a great domain, especially formal theorem proving, because you get this fantastic signal of actually verifying if the thing was correct. And so this means something like RL can work really, really well, and I think you could have systems that are perhaps very superhuman in math and still not technically have AGI.

Cursor

Lex
(00:10:27)
Okay, so can we take it all the way to Cursor? And what is Cursor? It’s a fork of VS Code, and VS Code has been one of the most popular editors for a long time. Everybody fell in love with it. Everybody left Vim, I left Emacs for it. Sorry. So it unified the developer community in some fundamental way. And then you look at the space of things, you look at the scaling laws, AI is becoming amazing, and you decided okay, it’s not enough to just write an extension for VS Code because there’s a lot of limitations to that. If AI is going to keep getting better and better and better, we need to really rethink how the AI is going to be part of the editing process. And so you decided to fork VS Code and start to build a lot of the amazing features we’ll be able to talk about. But what was that decision like? Because there’s a lot of extensions, including Copilot, of VS Code that are doing sort of AI type stuff. What was the decision like to just fork VS Code?
Michael
(00:11:33)
So the decision to do an editor seemed kind of self-evident to us for at least what we wanted to do and achieve, because when we started working on the editor, the idea was these models are going to get much better, their capabilities are going to improve, and it’s going to entirely change how you build software, both in that you will have big productivity gains but also radically in that the act of building software is going to change a lot. And so you’re very limited in the control you have over a code editor if you’re a plugin to an existing coding environment, and we didn’t want to get locked in by those limitations. We wanted to be able to just build the most useful stuff.
Lex
(00:12:08)
Okay. Well then the natural question is, VS Code is kind of with Copilot a competitor, so how do you win? Is it basically just the speed and the quality of the features?
Aman
(00:12:20)
Yeah, I mean I think this is a space that is quite interesting, perhaps quite unique where if you look at previous tech waves, maybe there’s kind of one major thing that happened and it unlocked a new wave of companies, but every single year, every single model capability or jump you get in model capabilities, you now unlock this new wave of features, things that are possible, especially in programming. And so I think in AI programming, being even just a few months ahead, let alone a year ahead makes your product much, much, much more useful. I think the Cursor a year from now will need to make the Cursor of today look obsolete. And I think Microsoft has done a number of fantastic things, but I don’t think they’re in a great place to really keep innovating and pushing on this in the way that a startup can.
Lex
(00:13:13)
Just rapidly implementing features.
Aman
(00:13:16)
Yeah. And kind of doing the research experimentation necessary to really push the ceiling.
Sualeh
(00:13:24)
I don’t know if I think of it in terms of features so much as I think of it in terms of capabilities for programmers. As the new o1 model came out, and I’m sure there are going to be more models of different types, like longer context and maybe faster, there’s all these crazy ideas that you can try, and hopefully 10% of the crazy ideas will make it into something kind of cool and useful, and we want people to have that sooner. To rephrase, an underrated fact is we’re making it for ourselves.

(00:13:59)
When we started Cursor, you really felt this frustration that models… You could see models getting better, but the Copilot experience had not changed. It was like, man, these guys, the ceiling is getting higher, why are they not making new things? They should be making new things. Where’s all the alpha features? There were no alpha features. I’m sure it was selling well. I’m sure it was a great business, but it didn’t feel… I’m one of these people that really want to try and use new things and there was no new thing for a very long while.
Lex
(00:14:35)
Yeah, it’s interesting. I don’t know how you put that into words, but when you compare a Cursor with Copilot, Copilot pretty quickly started to feel stale for some reason.
Arvid
(00:14:45)
Yeah, I think one thing that helps us is that we’re sort of doing it all in one, where we’re developing the UX and the way you interact with the model at the same time as we’re developing how we actually make the model give better answers. So how you build up the prompt, or how do you find the context, and for a Cursor Tab, how do you train the model? So I think it helps us to have the same people working on the entire experience [inaudible 00:15:17].
Sualeh
(00:15:17)
Yeah, it’s like the person making the UI and the person training the model sit like 18 feet away-
Aman
(00:15:24)
Often the same person even.
Sualeh
(00:15:25)
Yeah, often even the same person. You can create things that are sort of not possible if you’re not talking, you’re not experimenting.
Lex
(00:15:34)
And you’re using, like you said, Cursor to write Cursor?
Arvid
(00:15:37)
Of course.
Michael
(00:15:37)
Oh yeah.
Lex
(00:15:38)
Well, let’s talk about some of these features. Let’s talk about the all-knowing, the all-powerful, praise be to the Tab: auto-complete on steroids, basically. So how does Tab work? What is Tab?
Michael
(00:15:53)
To highlight and summarize at a high level, I’d say that there are two things that Cursor is pretty good at right now. There are other things that it does, but two things that it helps programmers with. One is this idea of looking over your shoulder and being a really fast colleague who can kind of jump ahead of you and type and figure out what you’re going to do next. And that was the original idea behind… That was kind of the kernel of the idea behind a good auto-complete: predicting what you’re going to do next. But you can make that concept even more ambitious by not just predicting the characters after your cursor, but actually predicting the next entire change you’re going to make, the next diff, the next place you’re going to jump to.

(00:16:35)
And the second thing Cursor is pretty good at right now too is helping you sometimes jump ahead of the AI and tell it what to do and go from instructions to code. And on both of those we’ve done a lot of work on making the editing experience for those things ergonomic and also making those things smart and fast.

Cursor Tab

Sualeh
(00:16:54)
One of the things we really wanted was we wanted the model to be able to edit code for us. That was kind of a wish, and we had multiple attempts at it before we had a good model that could edit code for you. Then after we had a good model, I think there’s been a lot of effort to make the inference fast for having a good experience, and we’ve been starting to incorporate… I mean, Michael sort of mentioned this ability to jump to different places, and that jump to different places I think came from a feeling of, once you accept an edit, it’s like, man, it should be just really obvious where to go next. It’s like, I’d made this change, the model should just know that the next place to go to is 18 lines down. If you’re a Vim user, you could press 18jj or whatever, but why am I doing this? The model should just know it.

(00:17:54)
So the idea was you just press Tab, it would go 18 lines down and then show you the next edit, and you would press Tab, so you could just keep pressing Tab. And so the internal competition was, how many Tabs can we make someone press? Once you have the idea, more abstractly, the thing to think about is, how are the edits zero entropy? So once you’ve expressed your intent and the edit is… There’s no new bits of information to finish your thought, but you still have to type some characters to make the computer understand what you’re actually thinking, then maybe the model should just sort of read your mind and all the zero-entropy bits should just be tabbed away. That was sort of the abstract version.
Aman
(00:18:42)
There’s this interesting thing where if you look at language model loss on different domains, I believe the bits per byte, which is a kind of character-normalized loss, for code is lower than for language, which means in general there are a lot of tokens in code that are super predictable, a lot of characters that are super predictable. And this is, I think, even magnified when you’re not just trying to auto-complete code, but predicting what the user’s going to do next in their editing of existing code. And so the goal of Cursor Tab is: let’s eliminate all the low-entropy actions you take inside of the editor. When the intent is effectively determined, let’s just jump you forward in time, skip you forward.
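(The bits-per-byte metric Aman mentions normalizes cross-entropy by bytes rather than tokens, so tokenizer differences wash out when comparing domains. A toy sketch of the arithmetic, with made-up numbers purely for illustration:)

```python
import math

def bits_per_byte(mean_loss_nats: float, n_tokens: int, n_bytes: int) -> float:
    """Convert mean cross-entropy (nats per token) into bits per byte."""
    total_bits = mean_loss_nats * n_tokens / math.log(2)  # nats -> bits
    return total_bits / n_bytes

# Made-up numbers: code is more predictable (lower per-token loss) and its
# tokens tend to cover more bytes, so its bits per byte comes out lower.
prose_bpb = bits_per_byte(2.0, n_tokens=1000, n_bytes=4000)
code_bpb = bits_per_byte(1.5, n_tokens=1000, n_bytes=6000)
```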
Lex
(00:19:22)
Well, what’s the intuition and what’s the technical details of how to do next Cursor prediction? That jump, that’s not so intuitive I think to people.
Aman
(00:19:31)
Yeah. I think I can speak to a few of the details on how to make these things work. They’re incredibly low latency, so you need to train small models on this task. In particular, they’re incredibly pre-fill token hungry. What that means is they have these really, really long prompts where they see a lot of your code and they’re not actually generating that many tokens. And so the perfect fit for that is using a sparse model, meaning an MoE (mixture-of-experts) model. So that was one breakthrough we made that substantially improved its performance at longer context. The other being a variant of speculative decoding that we built out called speculative edits. These are two, I think, important pieces of what make it quite high quality and very fast.
Lex
(00:20:20)
Okay, so MoE [inaudible 00:20:22], the input is huge, the output is small.
Aman
(00:20:24)
Yeah.
Lex
(00:20:25)
Okay. So what else can you say about how to make… Does caching play a role-
Aman
(00:20:30)
Oh, caching plays a huge role. Because you’re dealing with this many input tokens, if every single keystroke that you’re typing in a given line you had to rerun the model on all of those tokens passed in, you’re just going to one, significantly degrade latency, two, you’re going to kill your GPUs with load. So you need to design the actual prompts you use for the model such that they’re caching aware. And then yeah, you need to reuse the KV cache across requests just so that you’re spending less work, less compute.
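(The prefix-reuse idea can be illustrated with a toy cache: if consecutive keystrokes produce prompts that share a long common prefix, only the new suffix costs fresh compute. This is a hypothetical sketch of the principle, not Cursor’s implementation:)

```python
class PrefixCache:
    """Toy stand-in for a KV cache keyed by prompt prefix."""

    def __init__(self):
        self.states = {}         # prompt prefix -> opaque "KV state"
        self.chars_computed = 0  # proxy for GPU work done

    def encode(self, prompt: str):
        # Find the longest previously computed prefix of this prompt.
        best = max((p for p in self.states if prompt.startswith(p)),
                   key=len, default="")
        suffix = prompt[len(best):]
        self.chars_computed += len(suffix)  # only the new tail costs compute
        self.states[prompt] = ("state for", prompt)
        return self.states[prompt]

cache = PrefixCache()
cache.encode("def add(a, b):\n    return a")      # full prompt computed
cache.encode("def add(a, b):\n    return a + b")  # only " + b" computed
```

Designing prompts so the volatile parts (the line being typed) sit at the end, after a stable prefix, is what makes this reuse possible.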
Lex
(00:21:04)
Again, what are the things that Tab is supposed to be able to do in the near term, just to linger on that? Generate code, fill empty space, also edit code across multiple lines and then jump to different locations inside the same file and then-
Sualeh
(00:21:25)
Hopefully jump to different files also. So if you make an edit in one file and maybe you have to go to another file to finish your thought, it should go to the second file also.
Arvid
(00:21:36)
The full generalization is next action prediction. Sometimes you need to run a command in the terminal and it should be able to suggest the command based on the code that you wrote too, or sometimes you actually need to… It suggests something, but it’s hard for you to know if it’s correct because you actually need some more information to learn. You need to know the type to be able to verify that it’s correct. And so maybe it should actually take you to a place that’s the definition of something and then take you back so that you have all the requisite knowledge to be able to accept the next completion.
Lex
(00:22:13)
So providing the human the knowledge.
Arvid
(00:22:15)
Yes.
Lex
(00:22:17)
Right.
Arvid
(00:22:17)
Yeah.
Lex
(00:22:19)
I just got to know a guy named Primeagen who, I believe, has an… You can order coffee via SSH.
Aman
(00:22:28)
Oh yeah.
Arvid
(00:22:29)
We did that.
Sualeh
(00:22:30)
We did that.
Lex
(00:22:31)
So can the model also do that and provide you with caffeine? Okay. So that’s the general framework.
Michael
(00:22:39)
Yeah. And the magic moment would be if… Programming is this weird discipline where sometimes the next five minutes, not always, but sometimes the next five minutes of what you’re going to do is actually predictable from the stuff you’ve done recently. And so can you get to a world where that next five minutes either happens by you disengaging and it taking you through? Or maybe a little bit more of just you seeing next step what it’s going to do and you’re like, okay, that’s good, that’s good, that’s good, that’s good, and you can just sort of tap, tap through these big changes.

Code diff

Lex
(00:23:09)
As we’re talking about this, I should mention one of the really cool and noticeable things about Cursor is that there’s this whole diff interface situation going on. So the model suggests with the red and the green of here’s how we’re going to modify the code, and in the chat window you can apply and it shows you the diff and you can accept the diff. So maybe can you speak to whatever direction of that?
Sualeh
(00:23:32)
We’ll probably have four or five different kinds of diffs. So we have optimized the diff for the auto-complete, so that has a different diff interface than when you’re reviewing larger blocks of code. And then we’re trying to optimize another diff thing for when you’re doing multiple different files. And at a high level, the difference is, for when you’re doing auto-complete, it should be really, really fast to read. Actually, it should be really fast to read in all situations, but in auto-complete your eyes are focused in one area, you can’t be in too many… The humans can’t look in too many different places.
Lex
(00:24:15)
So you’re talking about on the interface side?
Sualeh
(00:24:17)
On the interface side. So it currently has this box on the side. So we have the current box, and if it tries to delete code in some place and add other code, it shows you a box on the side.
Aman
(00:24:28)
You can maybe show it if we pull it up on Cursor.com. This is what we’re talking about.
Sualeh
(00:24:33)
So that box-
Aman
(00:24:33)
Exactly here.
Sualeh
(00:24:35)
There were three or four different attempts at trying to make this thing work, where first the attempt was this blue crossed-out line. So before it was a box on the side, it used to show you the code to delete by showing you, Google Docs style, a line through it, and then you would see the new code. That was super distracting. And then we tried many different… There were deletions, there was trying the red highlight.

(00:25:05)
Then the next iteration of it, which is sort of funny, you would hold the, on Mac, the option button. So it would sort of highlight a region of code to show you that there might be something coming. So maybe in this example, the input and the value would all get blue. And the blue was to highlight that the AI had a suggestion for you. So instead of directly showing you the thing, it would just hint that the AI had a suggestion and if you really wanted to see it, you would hold the option button and then you would see the new suggestion. And if you release the option button, you would then see your original code.
Lex
(00:25:47)
So by the way, that’s pretty nice, but you have to know to hold the option button.
Aman
(00:25:51)
Yeah.
Lex
(00:25:51)
And by the way, I’m not a Mac user, but I got it. Option. It’s a button I guess you people have.
Sualeh
(00:26:00)
Again, it’s just not intuitive. I think that’s the key thing.
Aman
(00:26:03)
And there’s a chance this is also not the final version of it.
Arvid
(00:26:07)
I am personally very excited for making a lot of improvements in this area. We often talk about it as the verification problem where these diffs are great for small edits. For large edits or when it’s multiple files or something, it’s actually a little bit prohibitive to review these diffs. So there are a couple of different ideas here. One idea that we have is, okay, parts of the diffs are important. They have a lot of information. And then parts of the diff are just very low entropy. They’re the same thing over and over again. And so maybe you can highlight the important pieces and then gray out the not so important pieces. Or maybe you can have a model that looks at the diff and sees, oh, there’s a likely bug here. I will mark this with a little red squiggly and say, you should probably review this part of the diff. Ideas in that vein I think are exciting.
Lex
(00:27:11)
Yeah, that’s a really fascinating space of UX design engineering. So you’re basically trying to guide the human programmer through all the things they need to read and nothing more, optimally.
Arvid
(00:27:25)
And you want an intelligent model to do it. Currently, diff algorithms are just normal algorithms. There’s no intelligence. There’s intelligence that went into designing the algorithm, but then it doesn’t care whether it’s about this thing or that thing, whereas you want the model to do this.
Sualeh
(00:27:47)
So I think the general question is like, man, these models are going to get much smarter. As the models get much smarter, the changes they will be able to propose are much bigger. So as the changes get bigger and bigger and bigger, the humans have to do more and more and more verification work. It gets more and more and more… You need to help them out. I don’t want to spend all my time reviewing code.
Lex
(00:28:15)
Can you say a little more across multiple files [inaudible 00:28:19]?
Aman
(00:28:20)
Yeah. I mean, so GitHub tries to solve this with code review. When you’re doing code review, you’re reviewing multiple diffs across multiple files. But like Arvid said earlier, I think you can do much better than code review. Code review kind of sucks. You spend a lot of time trying to grok this code that’s often quite unfamiliar to you, and it often doesn’t even actually catch that many bugs. And I think you can significantly improve that review experience using language models, for example, using the kinds of tricks that Arvid had described of maybe pointing you towards the regions that actually matter. I think also, if the code is produced by these language models and not by someone else… The code review experience is designed for both the reviewer and the person that produced the code. In the case where the person that produced the code is a language model, you don’t have to care that much about their experience, and you can design the entire thing around the reviewer, such that the reviewer’s job is as fun, as easy, as productive as possible. I think that feels like the issue with just naively trying to make these things look like code review. I think you can be a lot more creative and push the boundary on what’s possible.
Arvid
(00:29:43)
And just one idea there is, I think ordering matters. Generally, when you review a PR, you have this list of files and you’re reviewing them from top to bottom, but actually, you want to understand this part first, because that came logically first, and then you want to understand the next part.
Arvid
(00:30:00)
And you don’t want to have to figure out that yourself. You want a model to guide you through the thing.
Lex
(00:30:06)
And is the step of creation going to be more and more natural language? Is that the goal, versus actually writing the code?
Arvid
(00:30:12)
I think sometimes. I don’t think it’s going to be the case that all of programming will be natural language, and the reason for that is if I’m pair programming with Sualeh and Sualeh is at the computer and the keyboard, and sometimes if I’m driving, I want to say to Sualeh, “Hey, implement this function,” and that works. And then sometimes it’s just so annoying to explain to Sualeh what I want him to do, and so I actually take over the keyboard and I show him. I write part of the example and then it makes sense and that’s the easiest way to communicate. And so I think that’s also the case for AI. Sometimes the easiest way to communicate with the AI will be to show an example and then it goes and does the thing everywhere else.

(00:30:54)
Or sometimes, if you’re making a website for example, the easiest way to show the AI what you want is not to tell it what to do but to drag things around or draw things, and maybe eventually we will get to brain-machine interfaces or whatever, and it can understand what you’re thinking. And so I think natural language will have a place. I think it will definitely not be the way most people program most of the time.

ML details

Lex
(00:31:20)
I’m really feeling the AGI with this editor. It feels like there’s a lot of machine learning going on underneath. Tell me about some of the ML stuff that makes it all work?
Aman
(00:31:31)
Cursor really works via this ensemble of custom models that we’ve trained alongside the frontier models that are fantastic at the reasoning-intense things. And so Cursor Tab, for example, is a great example of where you can specialize this model to be even better than frontier models, if you look at evals on the task we set it at. The other domain, which it’s surprising that it requires custom models, but it’s necessary and works quite well, is in Apply. So I think these models are… The frontier models are quite good at sketching out plans for code and generating rough sketches of the change, but actually creating diffs is quite hard for frontier models. You try to do this with Sonnet, with o1, any frontier model, and it really messes up stupid things like counting line numbers, especially in super, super large files. And so what we’ve done to alleviate this is we let the model sketch out this rough code block that indicates what the change will be, and we train a model to then apply that change to the file.
Lex
(00:32:42)
And we should say that with Apply, the model looks at your code, it gives you a really damn good suggestion of what new things to do. And the, seemingly for humans, trivial step of combining the two, you’re saying, is not so trivial.
Sualeh
(00:32:59)
Contrary to popular perception, it is not a deterministic algorithm.
Aman
(00:33:03)
Yeah, I think you see shallow copies of Apply elsewhere and it just breaks most of the time, because you think you can try to do some deterministic matching, and then it fails at least 40% of the time, and that just results in a terrible product experience. I think, in general, this regime of you are going to get smarter and smarter models… So one other thing that Apply lets you do is it lets you use fewer tokens with the most intelligent models. This is both expensive in terms of latency for generating all these tokens, and cost. So you can give this very, very rough sketch and then have your smaller models go and implement it, because it’s a much easier task to implement this very, very sketched-out code. And I think that this regime will continue, where you can use smarter and smarter models to do the planning, and then maybe the implementation details can be handled by the less intelligent ones. Perhaps you’ll have o1, maybe it’ll be even more capable models, given an even higher-level plan that is recursively applied by Sonnet and then the Apply model.
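(For illustration, a naive deterministic apply might exact-match the context lines the model quoted and splice the new lines between them. The moment the model paraphrases, reorders, or abbreviates the surrounding code, the match fails, which is the failure mode described above. A hypothetical sketch, not Cursor’s algorithm:)

```python
def naive_apply(file_lines, context_before, new_lines, context_after):
    """Splice new_lines between exact matches of the quoted context lines.
    Returns None when the context can't be found verbatim, which is the
    failure mode that makes purely deterministic matching unreliable."""
    n = len(file_lines)
    for i in range(n - len(context_before) + 1):
        if file_lines[i:i + len(context_before)] == context_before:
            j = i + len(context_before)
            # Skip the lines being replaced until context_after matches.
            for k in range(j, n - len(context_after) + 1):
                if file_lines[k:k + len(context_after)] == context_after:
                    return file_lines[:j] + new_lines + file_lines[k:]
    return None  # the model's sketch didn't quote the file exactly
```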
Sualeh
(00:34:16)
Maybe we should talk about how to make it fast if you like. Fast is always an interesting detail.
Arvid
(00:34:21)
Fast is good.
Lex
(00:34:22)
Yeah, how do you make it fast?
Aman
(00:34:25)
Yeah, so one big component of making it fast is speculative edits. So speculative edits are a variant of speculative decoding, and maybe it’d be helpful to briefly describe speculative decoding. With speculative decoding, what you do is you take advantage of the fact that, most of the time (and I’ll add the caveat that this is when you’re memory-bound in language model generation), if you process multiple tokens at once, it is faster than generating one token at a time. So this is the same reason why, if you look at tokens per second for prompt tokens versus generated tokens, it’s much, much faster for prompt tokens.

(00:35:09)
So what we do is, instead of using what speculative decoding normally does, which is using a really small model to predict these draft tokens that your larger model will then go in and verify, with code edits we have a very strong prior of what the existing code will look like, and that prior is literally the same exact code. So what you can do is just feed chunks of the original code back into the model, and then the model will just pretty much agree most of the time that, “Okay, I’m just going to spit this code back out.” And so you can process all of those lines in parallel, and you just do this with sufficiently many chunks. And then eventually you’ll reach a point of disagreement, where the model will now predict text that is different from the ground truth original code. It’ll generate those tokens, and then we will decide, after enough tokens match the original code, to restart speculating in chunks of code.

(00:36:02)
What this actually ends up looking like is just a much faster version of normal editing code. So it looks like a much faster version of the model rewriting all the code. So we can use the same exact interface that we use for diffs, but it will just stream down a lot faster.
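(The loop Aman describes can be sketched as follows, assuming a toy `predict(prefix)` that returns the model’s next token. The draft tokens are the original code itself; at the point of disagreement we generate normally until the output rejoins the original, then resume speculating. In a real system the chunk verification is a single batched forward pass; this is an illustrative sketch only:)

```python
def speculative_edit(original, predict, chunk=4, max_freerun=32):
    """Sketch of speculative edits over a token list.

    original: tokens of the current code (the draft source).
    predict(prefix) -> the model's next token given the output so far.
    """
    out, i = [], 0  # i = position in `original` we are speculating from
    while i < len(original):
        agreed = True
        # Speculate: propose the next chunk of the original as draft tokens.
        for _ in range(min(chunk, len(original) - i)):
            tok = predict(out)
            out.append(tok)
            if tok == original[i]:
                i += 1          # draft verified "for free"
            else:
                agreed = False  # point of disagreement: the actual edit
                break
        if not agreed:
            # Generate normally until the output rejoins the original code,
            # then resynchronize and resume speculating.
            for _ in range(max_freerun):
                tok = predict(out)
                out.append(tok)
                if tok in original[i:]:
                    i = original.index(tok, i) + 1
                    break
    return out

# Toy "model" that deterministically produces the edited code.
target = list("abXdef")
edited = speculative_edit(list("abcdef"), lambda prefix: target[len(prefix)])
```

With the toy model above, the unchanged runs `ab` and `def` stream through as accepted drafts, and only the actual edit `X` is generated the slow way.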
Sualeh
(00:36:21)
And then the advantage is that while it’s streaming, you can just also start reviewing the code before it’s done so there’s no big loading screen. Maybe that is part of the advantage.
Lex
(00:36:36)
So the human can start reading before the thing is done.
Sualeh
(00:36:39)
I think the interesting riff here is something like… I feel like speculation is a fairly common idea nowadays. It’s not only in language models. There’s obviously speculation in CPUs and there’s speculation for databases and there’s speculation all over the place.

GPT vs Claude

Lex
(00:36:54)
Well, let me ask the ridiculous question of which LLM is better at coding? GPT, Claude, who wins in the context of programming? And I’m sure the answer is much more nuanced because it sounds like every single part of this involves a different model.
Aman
(00:37:12)
I think there’s no model that Pareto-dominates others, meaning it is better in all categories that we think matter, the categories being speed, ability to edit code, ability to process lots of code, long context, a couple of other things, and coding capabilities. The one that I’d say right now is just net best is Sonnet. I think this is a consensus opinion. o1’s really interesting and it’s really good at reasoning. So if you give it really hard, programming-interview-style problems or LeetCode problems, it can do quite well on them, but it doesn’t feel like it understands your rough intent as well as Sonnet does. If you look at a lot of the other frontier models, one qualm I have is it feels like they’re not necessarily over… I’m not saying they train on benchmarks, but they perform really well on benchmarks relative to everything that’s in the middle. So if you try them on all these benchmarks and things that are in the distribution of the benchmarks they’re evaluated on, they’ll do really well. But when you push them a little bit outside of that, Sonnet is, I think, the one that does best at maintaining that same capability: it has the same capability on the benchmark as when you try to instruct it to do anything with coding.
Lex
(00:38:38)
Another ridiculous question is the difference between the normal programming experience versus what benchmarks represent? Where do benchmarks fall short, do you think, when we’re evaluating these models?
Sualeh
(00:38:49)
By the way, that’s a really, really hard, critically important detail of how different benchmarks are versus real coding, where real coding, it’s not interview style coding. Humans are saying half-broken English sometimes and sometimes you’re saying, “Oh, do what I did before.” Sometimes you’re saying, “Go add this thing and then do this other thing for me and then make this UI element.” And then it’s just a lot of things are context dependent. You really want to understand the human and then do what the human wants, as opposed to this… Maybe the way to put it abstractly is the interview problems are very well specified. They lean a lot on specification while the human stuff is less specified.
Michael
(00:39:50)
I think that this benchmark question is both complicated by what Sualeh just mentioned, and then also, what Aman was getting into is that even if you… There’s this problem of the skew between what you can actually model in a benchmark versus real programming, and that can be sometimes hard to encapsulate, because real programming’s very messy, and sometimes it isn’t super well specified what’s correct or what isn’t. But then it’s also doubly hard because of this public benchmark problem. And that’s both because public benchmarks are sometimes hill-climbed on, and then it’s really, really hard to also get the data from the public benchmarks out of the models.

(00:40:28)
And so for instance, one of the most popular agent benchmarks, SWE-Bench, is really, really contaminated in the training data of these foundation models. And so if you ask these foundation models to do a SWE-Bench problem, but you actually don’t give them the context of a code base, they can hallucinate the right file paths, they can hallucinate the right function names. And so it’s also just the public aspect of these things that’s tricky.
Aman
(00:40:53)
In that case, it could be trained on the literal issues or pull requests themselves, and maybe the labs will start to do a better job, or they’ve already done a good job, at decontaminating those things, but they’re not going to omit the actual training data of the repository itself. These are all some of the most popular Python repositories. SymPy is one example. I don’t think they’re going to handicap their models on SymPy and all these popular Python repositories in order to get true evaluation scores in these benchmarks.
Michael
(00:41:24)
I think that, given the dirtiness of benchmarks, there have been a few interesting crutches that places that build systems with these models, or build these models, actually use to get a sense of whether they’re going in the right direction or not. And in a lot of places, people will actually just have humans play with the things and give qualitative feedback on these. One or two of the foundation model companies have people where that’s a big part of their role. And internally, we also qualitatively assess these models and actually lean on that a lot, in addition to private evals that we have.
Arvid
(00:41:56)
It’s like the vibe.
Lex
(00:41:57)
The vibe, yeah, the vibe.
Arvid
(00:41:59)
It’s like the vibe.
Lex
(00:42:00)
The vibe benchmark, human benchmark, the humans. You pull in the humans to do a vibe check.
Arvid
(00:42:05)
Yeah.
Lex
(00:42:06)
Okay. That’s what I do. Just reading online forums and Reddit and X. Well, I don’t know how to properly load in people’s opinions because they’ll say things like, “I feel like Claude or GPT has gotten dumber,” or something. They’ll say, “I feel like…” And then I sometimes feel like that too, but I wonder if it’s the model’s problem or mine.
Aman
(00:42:34)
With Claude, there’s an interesting take I heard, where I think AWS has different chips, and I suspect they have slightly different numerics than Nvidia GPUs, and someone speculated that Claude’s degraded performance had to do with maybe using the quantized version that existed on AWS Bedrock versus whatever was running on Anthropic’s GPUs.
Lex
(00:43:00)
I interview a bunch of people that have conspiracy theories. I’m glad you spoke to this conspiracy.
Sualeh
(00:43:06)
Well, it’s not like conspiracy theory as much as humans. Humans are humans and there’s these details-
Lex
(00:43:14)
Yes.
Sualeh
(00:43:14)
And you’re doing this crazy amount of flops, and chips are messy, and man, you can just have bugs. It’s hard to overstate how hard bugs are to avoid.

Prompt engineering

Lex
(00:43:28)
What’s the role of a good prompt in all of this? We mentioned that benchmarks have really structured, well-formulated prompts. What should a human be doing to maximize success and what’s the importance of what the humans… You wrote a blog post on… You called it Prompt Design.
Arvid
(00:43:50)
Yeah, I think it depends on which model you’re using, and all of them are slightly different and they respond differently to different prompts, but I think the original GPT-4 and the original [inaudible 00:44:07] models last year, they were quite sensitive to the prompts, and they also had a very small context window. And so we have all of these pieces of information around the code base that would maybe be relevant in the prompt. You have the docs, you have the files that you add, you have the conversation history, and then there’s a problem like how do you decide what you actually put in the prompt and when you have a limited space? And even for today’s models, even when you have long context, filling out the entire context window means that it’s slower. It means that sometimes the model actually gets confused and some models get more confused than others.

(00:44:43)
And we have this one system internally that we call Preempt, which helps us with that a little bit. And I think it was built for the era before, where we had 8,000-token context windows. And it’s a little bit similar to when you’re making a website. You want it to work on mobile, you want it to work on a desktop screen, and you have this dynamic information, which you don’t have if, for example, you’re designing a print magazine, where you know exactly where you can put stuff. But when you have a website, or when you have a prompt, you have these inputs, and then you need to format them to always work. Even if the input is really big, then you might have to cut something down. And so the idea was, okay, let’s take some inspiration. What’s the best way to design websites? Well, the thing that we really like is React and the declarative approach, where you use JSX in JavaScript, and then you declare, “This is what I want, and I think this has higher priority or this has higher z-index than something else.”

(00:45:56)
And then you have this rendering engine in web design. It’s like Chrome, and in our case it’s a Preempt renderer, which then fits everything onto the page. And so you declare what you want, and then it figures out how to fit it. And so we have found that to be quite helpful, and I think the role of it has shifted over time, where initially it was to fit to these small context windows. Now it’s really useful because it helps us with splitting up the data that goes into the prompt and the actual rendering of it. And so it’s easier to debug, because you can change the rendering of the prompt and then try it on old prompts, because you have the raw data that went into the prompt, and then you can see, “Did my change actually improve it for this entire eval set?”
Lex
(00:46:45)
So do you literally prompt with JSX?
Aman
(00:46:48)
Yes. Yes.
Arvid
(00:46:48)
Yeah. So it looks like react. There are components. We have one component that’s a file component and it takes in the cursor. Usually there’s one line where the cursor is in your file and that’s probably the most important line because that’s the one you’re looking at. And so then you can give priorities. So that line has the highest priority, and then you subtract one for every line that is farther away. And then eventually when it’s rendered, it figures out how many lines can actually fit and it centers around that thing.
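The line-priority scheme Arvid describes can be sketched roughly like this in Python (the function and the token-cost heuristic are hypothetical illustrations, not Cursor's actual Preempt code):

```python
# A toy sketch of priority-based prompt rendering: the cursor's line gets
# the highest priority, priority decays with distance, and the renderer
# keeps the highest-priority lines that fit in the budget.

def render_file(lines, cursor_line, token_budget):
    """Score each line by closeness to the cursor, then keep the
    highest-priority lines that fit in the token budget."""
    scored = [(-abs(i - cursor_line), i, line) for i, line in enumerate(lines)]
    scored.sort(reverse=True)  # closest to the cursor first

    kept, used = [], 0
    for _, i, line in scored:
        cost = max(1, len(line) // 4)  # crude token estimate
        if used + cost > token_budget:
            continue  # a shorter line later might still fit
        kept.append((i, line))
        used += cost

    kept.sort()  # restore original file order for the final prompt
    return "\n".join(line for _, line in kept)
```

With a budget of five "tokens" on a 100-line file and the cursor on line 50, this keeps the five lines centered around the cursor, which mirrors the "centers around that thing" behavior described above.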
Lex
(00:47:17)
That’s amazing.
Aman
(00:47:18)
And you can do other fancy things where if you have lots of code blocks from the entire code base, you could use retrieval and things like embedding and re-ranking scores to add priorities for you through these components.
Lex
(00:47:30)
So should humans, when they ask questions, also try to use something like that? Would it be beneficial to write JSX in the prompt, or is the whole idea that it should be loose and messy?
Arvid
(00:47:43)
I think our goal is that you should just do whatever is the most natural thing for you, and then our job is to figure out how we actually retrieve the relevant things so that your thought actually makes sense.
Lex
(00:47:56)
Well, this is the discussion I had with Aravind of Perplexity is his whole idea is you should let the person be as lazy as he wants. That’s a beautiful thing, but I feel like you’re allowed to ask more of programmers, right?
Arvid
(00:48:14)
Yes.
Lex
(00:48:14)
So if you say, "Just do what you want," humans are lazy. There's a tension between just being lazy versus being prompted to provide more… Almost like the system pressuring you or inspiring you to be articulate. Not in terms of the grammar of the sentences, but in terms of the depth of the thoughts that you convey inside the prompts.
Aman
(00:48:39)
I think even as a system gets closer to some level of perfection, often when you ask the model for something, not enough intent is conveyed to know what to do. And there are a few ways to resolve that intent. One is the simple thing of having the model just ask you, “I’m not sure how to do these parts based on your query. Could you clarify that?” I think the other could be maybe if there are five or six possible generations, “Given the uncertainty present in your query so far, why don’t we just actually show you all of those and let you pick them?”
Lex
(00:49:19)
How hard is it for the model to choose to talk back versus just generating? It's hard to deal with the uncertainty. Do I choose to ask for more information to reduce the ambiguity?
Sualeh
(00:49:36)
So one of the things we do, it's like a recent addition, is try to suggest files that you can add. And while you're typing, one can guess what the uncertainty is. Maybe you're writing your API, and we can guess, using the commits that you've made previously in the same file, that the client and the server files are super useful. And there's a hard technical problem of how do you resolve it across all commits? Which files are the most important given your current prompt? The initial version is rolled out, and I'm sure we can make it much more accurate. It's very experimental, but then the idea is we show you: do you just want to add this file, this file, this file, and also tell the model to edit those files for you?

(00:50:37)
Because maybe if you're making the API, you should also edit the client and the server that are using the API and implementing the API. So that would be cool, as there's the phase where you're writing a prompt, and before you even click "Enter," maybe we can help resolve some of the uncertainty.

AI agents

Lex
(00:50:54)
To what degree do you use agentic approaches? How useful are agents?
Arvid
(00:50:59)
We think agents are really, really cool.
Lex
(00:50:59)
Okay.
Arvid
(00:51:03)
I think agents resemble a human… You can feel that you're getting closer to AGI because you see a demo where it acts as a human would, and it's really, really cool. I think agents are not yet super useful for many things. I think we're getting close to where they will actually be useful. And so I think there are certain types of tasks where having an agent would be really nice. I would love to have an agent. For example, if we have a bug where you sometimes can't Command+C and Command+V inside our chat input box, that's a task that's super well specified. I just want to say in two sentences, "This does not work, please fix it." And then I would love to have an agent that just goes off, does it, and then a day later, I come back and review the thing.
Lex
(00:52:02)
You mean it goes, finds the right file?
Arvid
(00:52:05)
Yeah, it finds the right files, it tries to reproduce the bug, it fixes the bug and then it verifies that it’s correct. And this could be a process that takes a long time. And so I think I would love to have that. And then I think a lot of programming, there is often this belief that agents will take over all of programming. I don’t think we think that that’s the case because a lot of programming, a lot of the value is in iterating, or you don’t actually want to specify something upfront because you don’t really know what you want until you have seen an initial version and then you want to iterate on that and then you provide more information.

(00:52:43)
And so for a lot of programming, I think you actually want a system that’s instant, that gives you an initial version instantly back and then you can iterate super, super quickly.
Lex
(00:52:52)
What about something that recently came out, Replit Agent, that also does setting up the development environment, installing software packages, configuring everything, configuring the databases, and actually deploying the app? Is that also in the set of things you dream about?
Arvid
(00:53:09)
I think so. I think that would be really cool. For certain types of programming, it would be really cool.
Lex
(00:53:15)
Is that within scope of Cursor?
Arvid
(00:53:17)
Yeah, we aren't actively working on it right now, but it's definitely… We want to make the programmer's life easier and more fun, and some things are just really tedious, and you need to go through a bunch of steps, and you want to delegate that to an agent. And then some things you can actually have an agent do in the background while you're working. Let's say you have a PR that's both backend and frontend, and you're working in the frontend, and then you can have a background agent that does some work and figures out what you're doing. And then when you get to the backend part of your PR, then you have some initial piece of code that you can iterate on. And so that would also be really cool.
Lex
(00:53:58)
One of the things we already talked about is speed, but I wonder if we can just linger on that some more, on the various technical details involved in making this thing really fast. So every single aspect of Cursor, most aspects of Cursor, feel really fast. Like I mentioned, the Apply is probably the slowest thing. And I'm sorry, the pain on Arvid's face as I say that.
Arvid
(00:54:22)
I know. It’s a pain. It’s a pain that we’re feeling and we’re working on fixing it.
Lex
(00:54:27)
Yeah, there's something that feels… I don't know what it is, like one second or two seconds, that feels slow. That actually shows that everything else is just really, really fast. So is there some technical detail about how to make some of these models, how to make the chat fast, how to make the diffs fast? Is there something that just jumps to mind?
Aman
(00:54:49)
Yeah. So we can go over a lot of the strategies that we use. One interesting thing is cache warming. And so what you can do is, as the user's typing, you're probably going to use some piece of context, and you can know that before the user's done typing. So as we discussed before, reusing the KV cache results in lower latency and lower cost across requests. So as the user starts typing, you can immediately warm the cache with, let's say, the current file contents, and then when they press enter, there are very few tokens it actually has to pre-fill and compute before starting the generation. This will significantly lower TTFT (time to first token).
Lex
(00:55:30)
Can you explain how KV cache works?
Aman
(00:55:33)
Yeah, so the way transformers work.
Lex
(00:55:37)
I like it.
Aman
(00:55:41)
One of the mechanisms that allows transformers to not just independently look at each token, but see previous tokens, are the keys and values in attention. And generally, the way attention works is you have at your current token some query, and then you have all the keys and values of all your previous tokens, which are some kind of representation that the model stores internally of all the previous tokens in the prompt. And by default, when you're doing a chat, the model has to, for every single token, do this forward pass through the entire model. That's a lot of matrix multiplies that happen, and that is really, really slow.

(00:56:23)
Instead, if you have already done that and you stored the keys and values and you keep that in the GPU, then… Let's say I have stored them for the last N tokens. If I now want to compute the output for the (N+1)th token, I don't need to pass those first N tokens through the entire model, because I already have all those keys and values. And so you just need to do the forward pass through that last token. And then when you're doing attention, you're reusing those keys and values that have been computed, which is the only sequentially dependent part of the transformer.
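As a rough illustration of what Aman is describing, here is a toy single-head attention decoder with a KV cache (dimensions and weights are made up; a real transformer has many layers and heads):

```python
import numpy as np

# A minimal single-head attention decoder with a KV cache, illustrating
# why generation only needs a forward pass for the newest token.

d = 8
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(x_new, cache):
    """Process ONE new token: compute its q/k/v, append k/v to the cache,
    and attend over all cached keys/values. Previous tokens are never
    re-run through the projections."""
    q, k, v = x_new @ Wq, x_new @ Wk, x_new @ Wv
    cache["K"].append(k)
    cache["V"].append(v)
    K, V = np.stack(cache["K"]), np.stack(cache["V"])
    weights = softmax(q @ K.T / np.sqrt(d))
    return weights @ V

cache = {"K": [], "V": []}
for t in range(5):                       # "prompt" tokens, cached once
    out = decode_step(rng.normal(size=d), cache)
# generating the 6th token touches only the new token plus the cache
out = decode_step(rng.normal(size=d), cache)
```

Without the cache, every step would recompute the key/value projections for the whole prefix; with it, each step is one projection plus one attention read.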
Lex
(00:56:59)
Is there higher-level caching, like caching of the prompts, that kind of stuff that could help?
Aman
(00:57:05)
I see. Yeah. There are other types of caching you can do. One interesting thing that you can do for Cursor Tab is you can basically predict ahead, as if the user had accepted the suggestion, and then trigger another request. And so then you've cached, you've done the speculative… It's a mix of speculation and caching, right? Because you're speculating what would happen if they accepted it, and then you have this suggestion cached. And then when they press tab, the next one is waiting for them immediately. It's a clever heuristic/trick that uses higher-level caching, and it feels fast despite there not actually being any changes in the model.
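The speculate-on-acceptance trick can be sketched like this (the `fake_model` stand-in and the class are hypothetical, not Cursor's implementation):

```python
# A toy sketch of "speculate as if the user accepted": when a suggestion
# is shown, pre-compute the NEXT suggestion for the document state that
# would result from accepting it, so a Tab press hits a warm cache.

def fake_model(text):
    return " next"          # stand-in for a real completion model

class TabCache:
    def __init__(self, model):
        self.model = model
        self.cache = {}     # document state -> precomputed suggestion

    def suggest(self, text):
        # serve from the speculative cache if we guessed this state
        suggestion = self.cache.pop(text, None) or self.model(text)
        # speculate: assume acceptance and pre-compute the follow-up
        accepted = text + suggestion
        self.cache[accepted] = self.model(accepted)
        return suggestion

tab = TabCache(fake_model)
s1 = tab.suggest("foo")          # computed fresh
s2 = tab.suggest("foo" + s1)     # served from the speculative cache
```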
Sualeh
(00:57:54)
And if you can make the KV cache smaller, one of the advantages you get is maybe you can speculate even more. Maybe you can guess, "Here's the 10 things that could be useful, predict the next 10," and then it's possible the user hits one of the 10. It's a much higher chance than the user hitting the exact one that you showed them. Maybe they type another character and hit something else in the cache. So there's all these tricks where… The general phenomenon here, which I think is also super useful for RL, is maybe a single sample from the model isn't very good, but if you predict 10 different things, it turns out that one of the 10 being right has much higher probability. These are pass@k curves, and part of what RL does is you can exploit this pass@k phenomenon to make many different predictions.

(00:58:53)
And one way to think about this is the model internally has some uncertainty over which of the k things is correct, or which of the k things does the human want. When we RL our Cursor Tab model, one of the things we're doing is we're predicting which of the 100 different suggestions the model produces is more amenable to humans. Which of them do humans like more than the others? Maybe there's something where the model can predict very far ahead versus a little bit, maybe somewhere in the middle. And then you can give a reward to the things that humans would like more, and punish the things that humans wouldn't like, and then train the model to output the suggestions that humans would like more. You have these RL loops that are very useful that exploit these pass@k curves. Aman maybe can go into even more detail.
Aman
(00:59:48)
Yeah, it is a little different than speed, but technically, you tie it back in because you can get away with the smaller model if you RL your smaller model and it gets the same performance as the bigger one.
Aman
(01:00:00)
… as the bigger one. So while I was mentioning stuff about reducing the size of your KV cache, there are other techniques there as well that are really helpful for speed. So back in the day, all the way two years ago, people mainly used multi-head attention, and I think there's been a migration towards more efficient attention schemes like grouped-query or multi-query attention, and this is really helpful, with larger batch sizes, for being able to generate the tokens much faster. The interesting thing here is this has no effect on the time-to-first-token pre-fill speed. The thing this matters for is generating tokens. And why is that? Because when you're generating tokens, instead of being bottlenecked by doing these super parallelizable matrix multiplies across all your tokens, you're bottlenecked, for long context with large batch sizes, by how quickly you can read those cached keys and values.

(01:01:07)
And so then that's memory bandwidth, and how can we make this faster? We can try to compress the size of these keys and values. So multi-query attention is the most aggressive of these. Where normally with multi-head attention you have some number of, quote, unquote, "key-value heads" and some number of query heads, multi-query just preserves the query heads and gets rid of all the key-value heads. So there's only one kind of key-value head, and there are all the remaining query heads. With grouped query, you instead preserve all the query heads, and then your keys and values have… There are fewer heads for the keys and values, but you're not reducing it to just one. But anyways, the whole point here is you're just reducing the size of your KV cache.
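A back-of-the-envelope calculation shows why cutting KV heads matters; the model dimensions below are illustrative, not any particular model's:

```python
# KV-cache size for the attention variants described above, in fp16.
# The 2x accounts for storing both keys AND values.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per

args = dict(n_layers=32, head_dim=128, seq_len=8192, batch=8, bytes_per=2)
mha = kv_cache_bytes(n_kv_heads=32, **args)  # multi-head: one KV head per query head
gqa = kv_cache_bytes(n_kv_heads=8, **args)   # grouped-query: 4 query heads share a KV head
mqa = kv_cache_bytes(n_kv_heads=1, **args)   # multi-query: one shared KV head

print(f"MHA {mha/2**30:.1f} GiB, GQA {gqa/2**30:.1f} GiB, MQA {mqa/2**30:.2f} GiB")
# prints: MHA 32.0 GiB, GQA 8.0 GiB, MQA 1.00 GiB
```

The cache shrinks linearly with the number of KV heads, which is exactly the memory-bandwidth win being discussed, and it is also why a smaller cache lets you cache more prompts or run bigger batches.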
Arvid
(01:02:00)
And then there is MLA.
Aman
(01:02:02)
Yeah, multi-head latent attention. That's a little more complicated. And the way that this works is it kind of turns the entirety of your keys and values across all your heads into this one latent vector that is then expanded back out at inference time.
Sualeh
(01:02:19)
But MLA is from this company called DeepSeek. It's quite an interesting algorithm. Maybe the key idea is: in both MQA and in other places, what you're doing is reducing the number of KV heads. The advantage you get from that is there are fewer of them, but maybe the theory is that you actually want each of the keys and values to be different. So one way to reduce the size is you keep one big shared vector for all the keys and values, and then you have smaller vectors for every single token, so that you can store only the smaller thing, as some sort of low-rank reduction. At the end, when you eventually want to compute the final thing, remember that you're memory-bandwidth bound, which means that you still have some compute left that you can use for these things. So you can expand the latent vector back out, and this is far more efficient, because you're reducing, for example, maybe 32x or something, the size of the vector that you're keeping.
Aman
(01:03:37)
Yeah, there’s perhaps some richness in having a separate set of keys and values and query that kind of pairwise match up versus compressing that all into one in that interaction at least.
Lex
(01:03:51)
Okay, and all of that is dealing with being memory bound. I mean, ultimately, how does that map to the user experience? Trying to get the-
Aman
(01:04:02)
Yeah. The two things that it maps to is, one, you can now make your cache a lot larger, because you have less space allocated for the KV cache. You can maybe cache a lot more aggressively, a lot more things, so you get more cache hits, which are helpful for reducing the time to first token, for the reasons that were described earlier. And then the second being, when you start doing inference with more and more requests and larger and larger batch sizes, you don't see much of a slowdown in generating the tokens.
Sualeh
(01:04:31)
Well, it also allows you to make your prompt bigger for certain-
Aman
(01:04:34)
Yeah. Yeah, so the size of your KV cache is the size of all your prompts multiplied by the number of prompts being processed in parallel. So you could increase either of those dimensions, right? The batch size or the size of your prompts, without degrading the latency of generating tokens.

Running code in background

Lex
(01:04:51)
Arvid, you wrote a blog post Shadow Workspace: Iterating on Code in the Background. So what’s going on [inaudible 01:04:59]?
Arvid
(01:04:58)
So to be clear, we want there to be a lot of stuff happening in the background, and we're experimenting with a lot of things. Right now, we don't have much of that happening, other than the cache warming or figuring out the right context that goes into your Command+K prompts, for example. But the idea is, if you can actually spend computation in the background, then you can help the user at a slightly longer time horizon than just predicting the next few lines that you're going to make. Actually, in the next 10 minutes, what are you going to make? And by doing it in the background, you can spend more computation doing that. And so the idea of the Shadow Workspace that we implemented, and that we use internally for experiments, is that to actually get an advantage from doing stuff in the background, you want some kind of feedback signal to give back to the model, because otherwise you can get higher performance by just letting the model think for longer, and so o1 is a good example of that.

(01:06:03)
But another way you can improve performance is by letting the model iterate and get feedback. And so one very important piece of feedback when you’re a programmer is the language server, which is this thing, it exists for most different languages, and there’s a separate language server per language. And it can tell you, “You’re using the wrong type here,” and then gives you an error, or it can allow you to go to definition and sort of understands the structure of your code. So language servers are extensions developed by… There is a TypeScript language server developed by the TypeScript people, a Rust language server developed by the Rust people, and then they all interface over the language server protocol to VS Code. So that VS Code doesn’t need to have all of the different languages built into VS Code but rather you can use the existing compiler infrastructure.
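For context on what that interface looks like: a language server speaks JSON-RPC 2.0 framed with a Content-Length header, and a "go to definition" request can be built roughly like this (the file path is just an example):

```python
import json

# A sketch of how an editor talks to a language server: LSP messages are
# JSON-RPC 2.0 payloads framed with a Content-Length header. This builds
# a textDocument/definition ("go to definition") request.

def lsp_frame(method, params, msg_id=1):
    body = json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params})
    return f"Content-Length: {len(body)}\r\n\r\n{body}"

request = lsp_frame("textDocument/definition", {
    "textDocument": {"uri": "file:///project/src/main.ts"},
    "position": {"line": 41, "character": 10},   # zero-based, per the spec
})
```

The editor writes frames like this to the server's stdin and reads responses framed the same way, which is what lets VS Code (and Cursor) reuse one protocol across every language's server.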
Lex
(01:06:52)
For linting purposes, what-
Arvid
(01:06:52)
It’s for linting. It’s for going to definition and for seeing the right types that you’re using.
Lex
(01:06:59)
So it’s doing type checking also.
Arvid
(01:07:01)
Yes, type checking and going to references. And that’s like when you’re working in a big project, you kind of need that. If you don’t have that, it’s really hard to code in a big project.
Lex
(01:07:12)
Can you say, again, how that’s being used inside Cursor, the language server protocol communication thing?
Arvid
(01:07:20)
So it's being used in Cursor to show to the programmer, just like in VS Code, but then the idea is you want to show that same information to the models, the AI models, and you want to do that in a way that doesn't affect the user, because you want to do it in the background. And so the idea behind the Shadow Workspace was, okay, one way we can do this is we spawn a separate window of Cursor that's hidden. You can set this flag where it's hidden; there is a window, but you don't actually see it. And inside of this window, the AI agents can modify code however they want, as long as they don't save it, because it's still the same folder, and then they can get feedback from the linters and go to definition and iterate on their code.
Lex
(01:08:04)
So literally run everything in the background as if… Right, maybe even run the code.
Arvid
(01:08:10)
So that’s the eventual version and that’s what you want. And a lot of the blog post is actually about how do you make that happen because it’s a little bit tricky. You want it to be on the user’s machine so that it exactly mirrors the user’s environment. And then on Linux, you can do this cool thing where you can actually mirror the file system and have the AI make changes to the files, and it thinks that it’s operating on the file level, but actually, that’s stored in memory and you can create this kernel-like extension to make it work. Whereas on Mac and Windows, it’s a little bit more difficult, but it’s a fun technical problem, so that’s why.
Aman
(01:08:57)
One maybe hacky but interesting idea that I like is holding a lock on saving. And so basically, you can have the language model hold the lock on saving to disk. Then instead of operating in the ground-truth version of the files that are saved to disk, you actually operate in what was the Shadow Workspace before, these unsaved things that only exist in memory, that you still get linter errors for and can code in. And then when you try to run code, there's just a small warning that there's a lock, and then you take the lock back from the language server or from the Shadow Workspace if you're trying to do things concurrently.

Debugging

Lex
(01:09:31)
That’s such an exciting future by the way. It’s a bit of a tangent, but to allow a model to change files, it’s scary for people but it’s really cool, to be able to just let the agent do a set of tasks and you come back the next day and kind of observe like it’s a colleague or something like that.
Aman
(01:09:52)
And I think there may be different versions of runnability where, for the simple things, where you're doing things in the span of a few minutes on behalf of the user as they're programming, it makes sense to make something work locally on their machine. I think for the more aggressive things, where you're making larger changes that take longer periods of time, you'll probably want to do this in some sandboxed remote environment, and that's another incredibly tricky problem: how do you exactly reproduce, or mostly reproduce to the point of it being effectively equivalent for running code, the user's environment with this remote sandbox?
Sualeh
(01:10:27)
I’m curious what kind of agents you want for coding? Do you want them to find bugs? Do you want them to implement new features? What agents do you want?
Lex
(01:10:36)
So by the way, when I think about agents, I don't think just about coding. For this particular podcast, there's video editing, and if you look in Adobe, there's code behind it. It's very poorly documented code, but you can interact with Premiere, for example, using code. And basically all the uploading, everything I do on YouTube, everything, as you could probably imagine, I do all of that through code, including translation and overdubbing, all of this. So I envision all of those kinds of tasks, automating many of the tasks that don't have to do directly with the editing. Okay, that's what I was thinking about. But in terms of coding, I would be fundamentally thinking about bug finding, many levels of bug finding, and also finding logical bugs. Not logical, like spiritual bugs or something. Ones like big wrong directions of implementation, that kind of stuff.
Sualeh
(01:11:38)
Magical [inaudible 01:11:39] and bug finding.
Aman
(01:11:40)
Yeah. I mean, it’s really interesting that these models are so bad at bug finding when just naively prompted to find a bug. They’re incredibly poorly calibrated.
Arvid
(01:11:51)
Even the smartest models.
Aman
(01:11:52)
Exactly, even o1.
Lex
(01:11:53)
How do you explain that? Is there a good intuition?
Aman
(01:11:58)
I think these models are really a strong reflection of the pre-training distribution, and I do think they generalize as the loss gets lower and lower, but I don't think the loss is low enough yet such that they're really fully generalizing on code. The things that we use these things for, that the frontier models are quite good at, are really code generation and question answering. And these things exist in massive quantities in pre-training, with all of the code on GitHub on the scale of many, many trillions of tokens, and questions and answers on things like Stack Overflow and maybe GitHub issues.

(01:12:39)
And so when you try to push one of these things at something that really doesn't exist very much online, like, for example, the Cursor Tab objective of predicting the next edit given the edits done so far, the brittleness kind of shows. And then bug detection is another great example, where there aren't really that many examples of actually detecting real bugs and then proposing fixes, and the models just really struggle at it. But I think it's a question of transferring the model, in the same way that you get this fantastic transfer from pre-trained models just on code in general to the Cursor Tab objective. You'll see a very, very similar thing with generalized models that are really good at code going to bug detection. It just takes a little bit of nudging in that direction.
Sualeh
(01:13:25)
Look, to be clear, I think they sort of understand code really well. While they're being pre-trained, the representation that's being built up almost certainly, somewhere in the stream, the model knows that maybe there's something sketchy going on. It sort of senses some sketchiness, but actually eliciting the sketchiness… Part of it is that humans are really calibrated on which bugs are really important. It's not just saying there's something sketchy. It's: is this sketchiness trivial, or is this sketchiness going to take the server down?

(01:14:04)
Part of it is maybe the cultural knowledge of why a staff engineer is good: because they know that three years ago, someone wrote a really sketchy piece of code that took the server down. As opposed to, maybe this thing is just an experiment. So a few bugs are fine; you're just trying to experiment and get the feel of the thing. And so if the model gets really annoying when you're writing an experiment, that's really bad. But if you're writing something for production, you're writing a database, you're writing code in Postgres or Linux or whatever, you're Linus Torvalds, it's sort of unacceptable to have even an edge case. And just having the calibration of how paranoid the user is, and like-
Aman
(01:14:51)
But even then, if you put in maximum paranoia, it still just doesn't quite get it.
Sualeh
(01:14:57)
Yeah, yeah. Yeah.

Dangerous code

Lex
(01:14:58)
I mean, this is hard for humans too, to understand which line of code is important and which is not. I think one of your principles on your website says, if code can do a lot of damage, one should add a comment that says, "This line of code is dangerous."
Arvid
(01:15:17)
And all caps, repeated 10 times.
Lex
(01:15:20)
No, you say for every single line of code inside the function, you have to… And that's quite profound. That says something about human beings, because the engineers move on; even the same person might just forget how a single function can sink the Titanic. You might not intuit that quite clearly by looking at the single piece of code.
Arvid
(01:15:42)
Yeah. And I think that one is partially also for today’s AI models where if you actually write dangerous, dangerous, dangerous in every single line, the models will pay more attention to that and will be more likely to find bugs in that region.
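A minimal sketch of the practice being described, with a made-up schema and function, loudly labeling the hazardous line so both future engineers and models weigh it more heavily:

```python
import sqlite3

# Illustrates the "label dangerous code" practice discussed above.
# The sessions table and cutoff logic are made-up examples.

def drop_stale_sessions(db, cutoff):
    # DANGEROUS, DANGEROUS, DANGEROUS: this DELETE is irreversible.
    # Double-check `cutoff` before calling; a wrong value removes live sessions.
    db.execute("DELETE FROM sessions WHERE last_seen < ?", (cutoff,))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id INTEGER, last_seen INTEGER)")
db.executemany("INSERT INTO sessions VALUES (?, ?)", [(1, 5), (2, 50)])
drop_stale_sessions(db, cutoff=10)   # only the stale session (id=1) is removed
```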
Lex
(01:16:00)
That's actually just straight up a really good practice of labeling code by how much damage it can do.
Arvid
(01:16:08)
Yeah. I mean, it’s controversial. Some people think it’s ugly. Sualeh does not like it.
Sualeh
(01:16:14)
Well, I think it's… In fact, I actually think this is one of the things I learned from Arvid. Aesthetically, I don't like it, but I think there's certainly something where it's useful for the models, and humans just forget a lot, and it's really easy to make a small mistake and just bring down the server. Of course, we test a lot and whatever, but there are always these things that you have to be very careful about.
Aman
(01:16:42)
Yeah, like with just normal docstrings, I think people will often just skim it when making a change and think, “Oh, I know how to do this,” and you really need to point it out to them so that doesn’t slip through.
Lex
(01:16:55)
Yeah. You have to be reminded that you could do a lot of damage that’s like we don’t really think about that. You think about, “Okay, how do I figure out how this works so I can improve it?” You don’t think about the other direction that it could-
Arvid
(01:17:09)
Until we have formal verification for everything, then you can do whatever you want, and you know for certain that you have not introduced a bug if the proofs pass.
Aman
(01:17:18)
Well, concretely, what do you think that future would look like?
Arvid
(01:17:22)
I think people will just not write tests anymore, and the model will suggest… You write a function, the model will suggest a spec, and you review the spec. And in the meantime, a smart reasoning model computes a proof that the implementation follows the spec. And I think that happens for most functions.
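A toy version of that workflow in Lean 4: the function is the implementation, the theorem is the spec, and the proof certifies the implementation meets the spec (a deliberately trivial example):

```lean
-- Implementation: a function under review.
def double (n : Nat) : Nat := n + n

-- Spec: `double n` always equals `2 * n`. The proof below is what a
-- reasoning model would be asked to produce for each function.
theorem double_meets_spec (n : Nat) : double n = 2 * n := by
  simp [double, Nat.two_mul]
```

Real functions have far richer specs, which is exactly the specification difficulty discussed next, but the shape of the workflow is the same: a human reviews the theorem statement, and a machine checks the proof.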
Michael
(01:17:44)
Do you think this gets at a little bit some of the stuff you were talking about earlier with the difficulty of specifying intent for what you want with software, where sometimes it might be because the intent is really hard to specify, it’s also then going to be really hard to prove that it’s actually matching whatever your intent is?
Arvid
(01:17:58)
You think that spec is hard to generate?
Michael
(01:18:01)
Yeah, or just for a given spec, maybe you can… I think there is a question of, can you actually do the formal verification? Is that possible? I think that there’s more to dig into there, but then also-
Arvid
(01:18:15)
Even if you have the spec?
Sualeh
(01:18:16)
If you have the spec-
Michael
(01:18:19)
Even if you have the spec, is the spec written in natural language? Or is it-
Arvid
(01:18:21)
No, [inaudible 01:18:21] the spec would be formal.
Aman
(01:18:24)
But how much easier would that be [inaudible 01:18:26]?
Michael
(01:18:27)
Okay. So then I think that you care about things that are not going to be easily well specified in the spec language.
Arvid
(01:18:30)
I see, I see.
Michael
(01:18:31)
That would maybe be an argument against "formal verification is all you need."
Aman
(01:18:36)
The worry is there’s this massive document-
Michael
(01:18:39)
[inaudible 01:18:39] replacing something like unit tests, sure.
Arvid
(01:18:41)
Yeah, yeah. I think you can probably also evolve the spec languages to capture some of the things that they don’t really capture right now. I don’t know. I think it’s very exciting.
Lex
(01:18:54)
And you’re speaking not just about single functions, you’re speaking about entire code bases.
Arvid
(01:19:00)
I think entire code bases are harder, but that is what I would love to have, and I think it should be possible. Because there's a lot of work recently where you can formally verify down to the hardware. You formally verify the C code, and then you formally verify through the GCC compiler, and then through the Verilog, down to the hardware. And that's an incredibly big system, but it actually works. And I think big code bases are sort of similar, in that they're a multi-layered system. And if you can decompose it and formally verify each part, then I think it should be possible. I think the specification problem is a real problem, but…
Aman
(01:19:39)
How do you handle side effects or how do you handle, I guess, external dependencies like calling the Stripe API?
Sualeh
(01:19:46)
Maybe Stripe would write a spec for their API.
Aman
(01:19:49)
But you can’t do this for everything. Can you do this for everything you use? How do you do it for… If there’s a language… Maybe people will use language models as primitives in the programs they write, and there’s a dependence on it and how do you now include that?
Arvid
(01:20:02)
I think you might be able to prove that still.
Aman
(01:20:05)
Prove what about language models?
Arvid
(01:20:07)
I think it feels possible that you could actually prove that a language model is aligned for example, or you can prove that it actually gives the right answer.
Sualeh
(01:20:20)
That’s the dream.
Lex
(01:20:21)
Yeah, that is… I mean, if it’s possible. That’s your I have a dream speech. If it’s possible, that will certainly help with making sure your code doesn’t have bugs and making sure AI doesn’t destroy all human civilization. So the full spectrum of AI safety to just bug finding. So you said the models struggle with bug finding. What’s the hope?
Sualeh
(01:20:44)
My hope initially is, and I can let Michael chime in too, but it was like, it should first help with the stupid bugs. It should very quickly catch the stupid bugs, like off-by-one errors. Sometimes you write something in a comment and do it the other way. It's very common. I do this. I write less than in a comment and maybe write the greater than or something like that. And the model is like, "Yeah, that looks sketchy. You sure you want to do that?" But eventually, it should be able to catch harder bugs too.
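A minimal illustration of the comment-versus-code mismatch Sualeh describes; the function names and the example are made up:

```python
# The kind of "stupid bug" described above: the comment and the code
# disagree by one flipped comparison. A bug-finding model should flag it.

def keep_small_buggy(xs, limit):
    # keep values strictly less than limit
    return [x for x in xs if x > limit]   # BUG: flipped relative to the comment

def keep_small_fixed(xs, limit):
    # keep values strictly less than limit
    return [x for x in xs if x < limit]
```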
Michael
(01:21:16)
Yeah. And I think that it's also important to note that this is… Having good bug-finding models feels necessary to get to the highest reaches of having AI do more and more programming for you, where if AI is building more and more of the system for you, you need to not just generate but also verify. And without that, some of the problems that we've talked about before with programming with these models will just become untenable. So it's not just for humans, like you write a bug, I write a bug, find the bug for me, but being able to verify the AI's code and check it is also really important.
Arvid
(01:21:52)
Yeah. And then how do you actually do this? We have had a lot of contentious dinner discussions of how do you actually train a bug model, but one very popular idea is that it's potentially much easier to introduce a bug than to actually find the bug. And so you can train a model to introduce bugs in existing code, and then you can train a reverse bug model that can find bugs using this synthetic data. So that's one example, but there are lots of ideas for how to [inaudible 01:22:22].
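A rough sketch of that synthetic-data idea, assuming a simple operator-flip mutation; the real training pipeline is not described here, and the mutation set is purely illustrative:

```python
import random

# Synthesize (buggy_code, bug_location) pairs by injecting a bug into clean
# code. A "reverse bug model" would then be trained on these pairs to map
# buggy code back to the location (and fix) of the bug.
MUTATIONS = [("<=", "<"), ("==", "!="), ("+", "-")]

def inject_bug(clean_code: str, rng: random.Random):
    """Return (buggy_code, line_number), or None if no mutation site exists."""
    lines = clean_code.splitlines()
    sites = [(i, old, new)
             for i, line in enumerate(lines)
             for old, new in MUTATIONS if old in line]
    if not sites:
        return None
    i, old, new = rng.choice(sites)
    lines[i] = lines[i].replace(old, new, 1)  # flip exactly one operator
    return "\n".join(lines), i
```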
Michael
(01:22:23)
You can also do a bunch of work not even at the model level, of taking the biggest models and then maybe giving them access to a lot of information that's not just the code. It's kind of a hard problem to stare at a file and be like, "Where's the bug?" And that's hard for humans often, right? And so often you have to run the code and be able to see things like traces and step through a debugger. There's a whole other direction where things tend toward that.

(01:22:46)
It could also be that there are two different product form factors here. It could be that you have a really specialty model that’s quite fast that’s running in the background and trying to spot bugs. And it might be that sometimes sort of to Arvid’s earlier example about some nefarious input box bug. It might be that sometimes you want to like… You know there’s a bug, you’re not just checking hypothesis free, you’re like, “This is a problem, I really want to solve it,” and you zap that with tons and tons and tons of compute, and you’re willing to put in $50 to solve that bug or something even more.
Lex
(01:23:12)
Have you thought about integrating money into this whole thing? I would pay probably a large amount of money if you found a bug, or even generated code that I really appreciated. I had a moment a few days ago when I started using Cursor where it generated three perfect functions for interacting with the YouTube API to update captions for localization in different languages. The API documentation is not very good. I googled it for a while and couldn't find exactly what I needed; there's a lot of confusing information, and Cursor generated it perfectly.

(01:23:53)
I just sat back, I read the code, I was like, "This is correct. I tested it, it's correct." I was like, "I want to tip." I want a button that goes, "Here's $5." One, that's really good just to support the company and support what the interface is. And the other is that it probably sends a strong signal, like "good job." It's a much stronger signal than just accepting the code; you actually send a strong "good job." And for bug finding, obviously, there's a lot of people that would pay a huge amount of money for a bug bounty thing, right? Do you guys think about that?
Arvid
(01:24:33)
Yeah, it’s a controversial idea inside the company. I think it sort of depends on how much you believe in humanity almost. I think it would be really cool if you spend nothing to try to find a bug. And if it doesn’t find a bug, you spend $0. And then if it does find a bug and you click accept, then it also shows in parentheses like $1. And so you spend $1 to accept the bug. And then of course, there’s a worry like okay, “We spent a lot of computation, maybe people will just copy paste.” I think that’s a worry. Then there is also the worry that introducing money into the product makes it… It doesn’t feel as fun anymore. You have to think about money. And all you want to think about is the code, and so maybe it actually makes more sense to separate it out, and you pay some fee every month, and then you get all of these things for free.
Lex
(01:25:29)
But there could be a tipping component which is not like it cost this-
Arvid
(01:25:32)
Yes, but it still has that dollar symbol. I think it’s fine, but I also see the point where maybe you don’t want to introduce it.
Aman
(01:25:40)
Yeah, I was going to say the moment that feels like people do this is when they share it. When they have this fantastic example, they just share it with their friends.
Michael
(01:25:46)
There is also a potential world where there’s a technical solution to this like honor system problem too, where if we can get to a place where we understand the output of the system more, I mean, to the stuff we were talking about with error checking with the LSP and then also running the code. But if you could get to a place where you could actually somehow verify, “Oh, I have fixed the bug,” maybe then the bounty system doesn’t need to rely on the honor system too.

Branching file systems

Lex
(01:26:09)
How much interaction is there between the terminal and the code? How much information is gained from running the code in the terminal? Can you do a loop where it runs the code and suggests how to change the code if the code gets an error at runtime? Because right now they're completely separate worlds? I know you can do Control+K inside the terminal to help you write the code.
Aman
(01:26:35)
You can use terminal context as well inside of Command K, kind of everything. We don't have the looping part yet, but we suspect something like this could make a lot of sense. There's a question of whether it happens in the foreground too, or if it happens in the background like what we've been discussing.
Lex
(01:26:52)
Sure. The background's pretty cool. It could be running the code in different ways. Plus there's a database side to this, which is how do you protect it from modifying the database? But okay.
Sualeh
(01:27:03)
I mean, there's certainly cool solutions there. There's this new API that is being developed for… It's not in AWS, but it certainly… I think it's in PlanetScale. I don't know if PlanetScale was the first one to add it. It's this ability to sort of add branches to a database, which is like, if you're working on a feature and you want to test against the prod database, but you don't actually want to test against the prod database, you could sort of add a branch to the database. And the way they do that is they add a branch to the write-ahead log. And there's obviously a lot of technical complexity in doing it correctly. I guess database companies need new things to do. They have good databases now. And I think turbopuffer, which is one of the databases we use, is going to add maybe branching to the write-ahead log. So maybe the AI agents will use branching, they'll test against some branch, and it's sort of going to be a requirement for the database to support branching or something.
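A toy sketch of what database branching on top of a write-ahead log could look like; this is purely conceptual and not how PlanetScale or turbopuffer actually implement it:

```python
class BranchingWAL:
    """Toy write-ahead log where a branch shares its parent's log prefix.

    A fork records only (parent, length of parent log at fork time);
    reads on a branch fall through to that frozen prefix.
    """
    def __init__(self):
        self.logs = {"main": []}       # branch name -> list of (key, value)
        self.parents = {"main": None}  # branch name -> (parent, fork_length)

    def put(self, branch, key, value):
        self.logs[branch].append((key, value))

    def fork(self, parent, child):
        # The child starts empty and is cheap: no data is copied.
        self.logs[child] = []
        self.parents[child] = (parent, len(self.logs[parent]))

    def get(self, branch, key, limit=None):
        # Scan this branch's entries newest-first, then recurse into the
        # parent, restricted to the prefix that existed at fork time.
        log = self.logs[branch] if limit is None else self.logs[branch][:limit]
        for k, v in reversed(log):
            if k == key:
                return v
        link = self.parents[branch]
        if link is None:
            return None
        parent, fork_len = link
        return self.get(parent, key, fork_len)
```

The key property is that writes to `main` after the fork are invisible to the branch, so an agent can test against production-shaped data without touching it.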
Aman
(01:28:10)
It would be really interesting if you could branch a file system, right?
Sualeh
(01:28:13)
Yeah. I feel like everything needs branching. It’s like-
Aman
(01:28:13)
Yeah.
Lex
(01:28:17)
Yeah. The problem with the multiverse, right? If you branch on everything that’s like a lot.
Sualeh
(01:28:24)
There’s obviously these super clever algorithms to make sure that you don’t actually use a lot of space or CPU or whatever.
Lex
(01:28:32)
Okay. This is a good place to ask about infrastructure. So you guys mostly use AWS, what are some interesting details? What are some interesting challenges? Why’d you choose AWS? Why is AWS still winning? Hashtag.
Arvid
(01:28:45)
AWS is just really, really good. It is really good. Whenever you use an AWS product, you just know that it’s going to work. It might be absolute hell to go through the steps to set it up.
Lex
(01:29:02)
Why is the interface so horrible?
Sualeh
(01:29:04)
Because it’s-
Arvid
(01:29:05)
It’s just so good. It doesn’t need to-
Lex
(01:29:06)
It’s the nature of winning.
Sualeh
(01:29:09)
I think that's exactly it. It's just the nature of them winning.
Arvid
(01:29:11)
Yeah, yeah. But AWS we can always trust, it will always work. And if there is a problem, it’s probably your problem. Yeah.

Scaling challenges

Lex
(01:29:20)
Okay. Are there some interesting challenges to… You guys are a pretty new startup… to scaling to so many people and-
Michael
(01:29:29)
Yeah, I think it has been an interesting journey adding each extra zero to the requests per second. All of the general components you're using for caching and databases run into issues as you make things bigger and bigger, and now we're at the scale where we get overflows on our tables and things like that. And then there have also been some custom systems that we've built. For instance, our retrieval system for computing a semantic index of your code base and answering questions about a code base has, continually, I feel like, been one of the trickier things to scale.
Sualeh
(01:30:04)
I have a few friends who are super senior engineers, and one of their lines is, it's very hard to predict where systems will break when you scale them. You can try to predict in advance, but there's always something weird that's going to happen when you add these extra zeros. You thought through everything, but you didn't actually think through everything. But I think for that particular system, we've… So for concrete details: the thing we do is, obviously, we chunk up all of your code, and then we send up the code for embedding, and we embed the code. And then we store the embeddings in a database, but we don't actually store any of the code. And then there are reasons around making sure that we don't introduce client bugs, because we're very, very paranoid about client bugs. We store much of the details on the server. Everything is encrypted.

(01:31:08)
So one of the technical challenges is always making sure that the local index, the local code base state is the same as the state that is on the server. The way, technically, we ended up doing that is, for every single file you can keep this hash, and then for every folder you can keep a hash, which is the hash of all of its children. You can recursively do that until the top. Why do something complicated? One thing you could do is you could keep a hash for every file and every minute, you could try to download the hashes that are on the server, figure out what are the files that don’t exist on the server. Maybe you just created a new file, maybe you just deleted a file, maybe you checked out a new branch, and try to reconcile the state between the client and the server.

(01:31:57)
But that introduces absolutely ginormous network overhead, both on the client side. Nobody really wants us to hammer their WiFi all the time if you're using Cursor. But also, it would introduce ginormous overhead on the database. It would be reading this tens-of-terabytes database, approaching 20 terabytes or something of data, every second. That's just crazy. You definitely don't want to do that. So what you do is you just try to reconcile the single hash, which is at the root of the project. And then if something mismatches, then you go and find where all the things disagree. Maybe you look at the children and see if the hashes match. If the hashes don't match, go look at their children, and so on. But you only do that in the scenario where things don't match. For most people, most of the time, the hashes match.
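The reconciliation scheme described here can be sketched as follows, assuming simple in-memory trees (the real client/server state and storage details differ):

```python
import hashlib

def file_hash(content: str) -> str:
    return hashlib.sha256(content.encode()).hexdigest()

def tree_hash(node) -> str:
    """Hash of a file (str) or a folder (dict of name -> node).

    A folder's hash covers its children's names and hashes, so any change
    anywhere below bubbles up to the root hash.
    """
    if isinstance(node, str):
        return file_hash(node)
    h = hashlib.sha256()
    for name in sorted(node):
        h.update(name.encode())
        h.update(tree_hash(node[name]).encode())
    return h.hexdigest()

def diff(client, server, path=""):
    """Descend only where hashes disagree; return mismatching paths."""
    if tree_hash(client) == tree_hash(server):
        return []                      # common case: nothing to reconcile
    if isinstance(client, str) or isinstance(server, str):
        return [path]
    mismatches = []
    for name in sorted(set(client) | set(server)):
        if name not in client or name not in server:
            mismatches.append(f"{path}/{name}")   # created or deleted
        else:
            mismatches.extend(diff(client[name], server[name], f"{path}/{name}"))
    return mismatches
```

In the common case the root hashes match and `diff` returns immediately; only a mismatch triggers descent into children.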
Lex
(01:32:50)
So it’s like a hierarchical reconciliation-
Sualeh
(01:32:53)
Yeah.
Lex
(01:32:53)
… of hashes-
Sualeh
(01:32:53)
Something like that.
Aman
(01:32:54)
Yeah, it’s called a Merkle tree.
Lex
(01:32:56)
Yeah, Merkle. Yeah. Yeah, this is cool to see that you have to think through all these problems.
Sualeh
(01:33:03)
The reason it's gotten hard is just because of the number of people using it, and some of your customers have really, really large code bases, to the point where… We originally worked with our own code base, which is big, but it's just not the size of some company that's been there for 20 years and has a ginormous number of files, and you want to scale that across programmers. There are all these details where building the simple thing is easy, but scaling it to a lot of people and a lot of companies is obviously a difficult problem, which is independent of, actually… So there's the part of this that is scaling our current solution, and also coming up with new ideas that, obviously, we're working on, but then scaling all of that in the last few weeks and months.
Aman
(01:33:48)
Yeah. There are a lot of clever things, additional things, that go into this indexing system. For example, the bottleneck in terms of cost is not storing things in the vector database or the database. It's actually embedding the code. You don't want to re-embed the code base for every single person in a company that is using the same exact code, except for maybe they're on a different branch with a few different files, or they've made a few local changes. Because again, embeddings are the bottleneck, you can do this one clever trick and not have to worry about the complexity of dealing with branches and the other databases, where you just have some cache of the actual vectors computed from the hash of a given chunk. So this means that when the nth person at a company goes and embeds their code base, it's really, really fast. And you do all this without actually storing any code on our servers at all. No code data is stored. We just store the vectors in the vector database and the vector cache.
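A minimal sketch of that caching trick, with a stand-in embedding function; the real model call and storage layer are assumptions:

```python
import hashlib

def fake_embed(chunk: str):
    """Stand-in for a real (expensive) embedding model call."""
    return [len(chunk), sum(map(ord, chunk)) % 997]

class EmbeddingCache:
    """Cache vectors keyed by the hash of each code chunk, so the nth
    person indexing the same code pays nothing for unchanged chunks."""
    def __init__(self, embed=fake_embed):
        self.embed = embed
        self.cache = {}   # chunk hash -> vector
        self.calls = 0    # how many real embedding calls were made

    def get_vector(self, chunk: str):
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.embed(chunk)
        return self.cache[key]

    def index(self, chunks):
        return [self.get_vector(c) for c in chunks]
```

Because the key is the content hash, branches and local edits only cost embedding calls for the chunks that actually changed.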
Lex
(01:34:45)
What’s the biggest gains at this time you get from indexing the code base? Just out of curiosity, what benefit do users have? It seems like longer term, there’ll be more and more benefit, but in the short term, just asking questions of the code base, what’s the usefulness of that?
Arvid
(01:35:06)
I think the most obvious one is just, you want to find out where something is happening in your large code base, and you have a fuzzy memory of, “Okay, I want to find the place where we do X,” but you don’t exactly know what to search for in a normal text search. So you ask a chat, you hit command enter to ask with the code base chat. And then very often, it finds the right place that you were thinking of.
Aman
(01:35:33)
Like you mentioned, in the future, I think it's only going to get more and more powerful, where we're working a lot on improving the quality of our retrieval. I think the ceiling for that is really, really much higher than people give it credit for.
Lex
(01:35:46)
One question that's good to ask here: have you considered, and why haven't you done, more local stuff? It seems like everything we just discussed is exceptionally difficult to do. To go to the cloud, you have to think about all these things with the caching and the large code base where a large number of programmers are using the same code base. You have to figure out the puzzle of that. A lot of software just does this heavy computational stuff locally. So have you considered doing embeddings locally?
Arvid
(01:36:18)
Yeah, we thought about it, and I think it would be cool to do it locally. I think it's just really hard. One thing to keep in mind is that some of our users use the latest MacBook Pro, but most of our users, more than 80% of our users, are on Windows machines, many of which are not very powerful. So local models really only work on the latest computers, and it's also a big overhead to build that in. So even if we would like to do that, it's currently not something that we are able to focus on. I think there are some people that do that, and I think that's great, but especially as models get bigger and bigger and you want to do fancier things with bigger models, it becomes even harder to do it locally.
Sualeh
(01:37:07)
Yeah. It’s not a problem of weaker computers. It’s just that for example, if you’re some big company, you have big company code base. It’s just really hard to process big company code base even on the beefiest MacBook Pros. It’s not even a matter of if you’re just a student or something. I think if you’re the best programmer at a big company, you’re still going to have a horrible experience. If you do everything locally where you could do it and scrape by, but again, it wouldn’t be fun anymore.
Aman
(01:37:40)
Yeah. Like, approximate nearest neighbors on this massive code base is going to just eat up your memory and your CPU, and that's just that. Let's talk about the modeling side too, where, as Arvid said, there are these massive headwinds against local models. One, things seem to move towards MoEs, where one benefit is maybe they're more memory-bandwidth bound, which plays in favor of local versus using GPUs, or using Nvidia GPUs. But the downside is these models are just bigger in total, and they're going to need to fit, often not even on a single node, but multiple nodes. There's no way that's going to fit inside of even a really good MacBook. And I think especially for coding, it's not as much a question of, does it clear some bar of the model being good enough to do these things and then we're satisfied? Which may be the case for other problems, and maybe where local models shine. But people are always going to want the best, the most intelligent, the most capable things, and that's going to be really, really hard to run locally for almost all people.
Sualeh
(01:38:51)
Don’t you want the most capable model? You want [inaudible 01:38:55] too?
Aman
(01:38:56)
And also o1-
Lex
(01:38:58)
I like how you’re pitching me.
Aman
(01:39:00)
o1 is another-
Lex
(01:39:00)
Would you be satisfied with an inferior model? Listen, yes, I’m one of those, but there’s some people that like to do stuff locally, especially like… Really, there’s a whole obviously open source movement that resists. It’s good that they exist actually because you want to resist the power centers that are growing our-
Arvid
(01:39:20)
There’s actually an alternative to local models that I am particularly fond of. I think it’s still very much in the research stage, but you could imagine to do homomorphic encryption for language model inference. So you encrypt your input on your local machine, then you send that up, and then the server can use loss of computation. They can run models that you cannot run locally on this encrypted data, but they cannot see what the data is, and then they send back the answer and you decrypt the answer and only you can see the answer. So I think that’s still very much research and all of it is about trying to make the overhead lower because right now, the overhead is really big, but if you can make that happen, I think that would be really, really cool, and I think it would be really, really impactful because I think one thing that’s actually worrisome is that, as these models get better and better, they’re going to become more and more economically useful.

(01:40:18)
So more and more of the world's information and data will flow through one or two centralized actors. And then there are worries about, there can be traditional hacker attempts, but it also creates this scary part where, if all of the world's information is flowing through one node in plaintext, you can have surveillance in very bad ways. Sometimes that will happen… Initially, it will be for good reasons. People will want to try to protect against bad actors using AI models in bad ways, and then you will add in some surveillance code. And then someone else will come in, and you're on a slippery slope, and then you start doing bad things with a lot of the world's data. So I am very hopeful that we can solve homomorphic encryption for-
Lex
(01:41:11)
Yeah, and-
Arvid
(01:41:12)
… language model inference.
Lex
(01:41:12)
… doing privacy-preserving machine learning. But I would say that's the challenge we have with all software these days. There are so many features that can be provided from the cloud, and all of us increasingly rely on it, and it makes our lives awesome. But there are downsides, and that's why you rely on really good security to protect from basic attacks. But there's also only a small set of companies that are controlling that data, and they obviously have leverage, and they could be infiltrated in all kinds of ways. That's the world we live in. So it's-
Sualeh
(01:41:43)
Yeah, the thing I'm actually quite worried about is the world where… Anthropic has this responsible scaling policy, where right now we're at the low ASLs, which is the AI safety level or whatever, of the models. But as we get to ASL-3, ASL-4, whatever, models which are very powerful… for mostly reasonable security reasons, you would want to monitor all the prompts. I think that's reasonable and understandable where everyone is coming from. But man, it'd be really horrible if all the world's information is monitored that heavily. It's way too centralized. It's like this really fine line you're walking, where on the one side, you don't want the models to go rogue. On the other side, humans like… I don't know if I trust all the world's information to pass through three model providers.
Aman
(01:42:44)
Why do you think it’s different than cloud providers?
Arvid
(01:42:47)
Because I think a lot of this data would never have gone to the cloud providers in the first place where this is often… You want to give more data to the AI models, you want to give personal data that you would never have put online in the first place to these companies or to these models. It also centralizes control where right now, for cloud, you can often use your own encryption keys, and AWS can’t really do much. But here, it’s just centralized actors that see the exact plain text of everything.

Context

Lex
(01:43:31)
Yeah. On the topic of context, that's actually been a friction for me. When I'm writing code in Python, there's a bunch of stuff imported. You could probably intuit the kind of stuff I would like to include in the context. How hard is it to auto figure out the context?
Michael
(01:43:51)
It’s tricky. I think we can do a lot better at computing the context automatically in the future. One thing that’s important to note is, there are trade-offs with including automatic context. So the more context you include for these models, first of all, the slower they are and the more expensive those requests are, which means you can then do less model calls and do less fancy stuff in the background. Also, for a lot of these models, they get confused if you have a lot of information in the prompt. So the bar for accuracy and for relevance of the context you include should be quite high. Already, we do some automatic context in some places within the product. It’s definitely something we want to get a lot better at. I think that there are a lot of cool ideas to try there, both on the learning better retrieval systems, like better embedding models, better rerankers.

(01:44:48)
I think that there are also cool academic ideas, stuff we've tried out internally, but also that the field writ large is grappling with: can you get language models to a place where you can actually just have the model itself understand a new corpus of information? The most popular talked-about version of this is, can you make the context windows infinite? Then, if you make the context windows infinite, can you make the model actually pay attention to the infinite context? And then, after you can make it pay attention to the infinite context, to make it somewhat feasible to actually do it, can you then do caching for that infinite context, so you don't have to recompute it all the time? But there are other cool ideas being tried that are a little bit more analogous to fine-tuning, of actually learning this information in the weights of the model. It might be that you actually get a qualitatively different type of understanding if you do it more at the weight level than if you do it at the in-context learning level.

(01:45:37)
I think the jury's still a little bit out on how this is all going to work in the end. But in the interim, as a company, we are really excited about better retrieval systems and picking the parts of the code base that are most relevant to what you're doing, and we could do that a lot better.
Aman
(01:45:52)
One interesting proof of concept for learning this knowledge directly in the weights is with VS Code. So we're in a VS Code fork, and VS Code's code is all public. So these models, in pre-training, have seen all the code. They've probably also seen questions and answers about it. And then they've been fine-tuned and RLHFed to be able to answer questions about code in general. So when you ask it a question about VS Code, sometimes it'll hallucinate, but sometimes it actually does a pretty good job at answering the question. I think this is just by… It happens to be okay. But what if you could actually specifically train or post-train a model such that it really was built to understand this code base?

(01:46:38)
It’s an open research question, one that we’re quite interested in. And then there’s also uncertainty of, do you want the model to be the thing that end-to-end is doing everything, i.e. it’s doing the retrieval in its internals and then answering a question, creating the code, or do you want to separate the retrieval from the frontier model, where maybe you’ll get some really capable models that are much better than the best open source ones in a handful of months? And then you’ll want to separately train a really good open source model to be the retriever, to be the thing that feeds in the context to these larger models.
Lex
(01:47:14)
Can you speak a little more to post-training a model to understand the code base? What do you mean by that? Is this a synthetic data direction? Is this-
Aman
(01:47:23)
Yeah, there are many possible ways you could try doing it. There's certainly no shortage of ideas. It's just a question of going in and trying all of them and being empirical about which one works best. One very naive thing is to try to replicate what's done with VS Code and these frontier models. So let's continue pre-training: some kind of continued pre-training that includes general code data but also throws in a lot of the data of some particular repository that you care about. And then in post-training, meaning in… Let's just start with instruction fine-tuning. You have a normal instruction fine-tuning data set about code. Then you throw in a lot of questions about code in that repository.

(01:48:07)
So you could either get ground truth ones, which might be difficult, or you could do what you hinted at or suggested: using synthetic data, i.e., having the model ask questions about various pieces of the code. So you take the pieces of the code, then prompt the model, or have a model propose a question for that piece of code, and then add those as instruction fine-tuning data points. And then in theory, this might unlock the model's ability to answer questions about that code base.
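A hedged sketch of that synthetic-data loop, with the model call replaced by a stub; a real pipeline would prompt an actual LLM to propose each question:

```python
def propose_question(chunk: str) -> str:
    """Stand-in for prompting an LLM with 'write a question this code
    answers'. Here we just template off the chunk's first line."""
    first_line = chunk.strip().splitlines()[0]
    return f"What does `{first_line}` do in this repository?"

def build_finetune_dataset(chunks):
    """Turn repository code chunks into (instruction, context) pairs for
    instruction fine-tuning, as sketched in the conversation."""
    dataset = []
    for chunk in chunks:
        dataset.append({
            "instruction": propose_question(chunk),
            "context": chunk,  # the passage the answer should come from
        })
    return dataset
```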

OpenAI o1

Lex
(01:48:39)
Let me ask you about OpenAI o1. What do you think is the role of that kind of test time compute system in programming?
Aman
(01:48:47)
I think test time compute is really, really interesting. So there's been the pre-training regime, which will, as you scale up the amount of data and the size of your model, get you better and better performance, both on loss and then on downstream benchmarks and just general performance, as we use it for coding or other tasks. But we're starting to hit a bit of a data wall, meaning it's going to be hard to continue scaling up this regime. So scaling up test time compute is an interesting alternative: now increasing the number of inference-time flops that we use, and as you increase the number of flops you use at inference time, getting corresponding improvements in the performance of these models. Traditionally, we just had to literally train a bigger model that always used that many more flops, but now we could perhaps use the same-size model and run it for longer to be able to get an answer at the quality of a much larger model.

(01:49:46)
So the really interesting thing I like about this is there are some problems that perhaps require 100 trillion parameter model intelligence trained on 100 trillion tokens. But that's maybe 1%, maybe 0.1%, of all queries. So are you going to spend all of this effort, all of this compute, training a model that costs that much and then run it so infrequently? It feels completely wasteful, when instead you train the model that is capable of doing the 99.9% of queries, and then you have a way, at inference time, of running it longer for those few people that really, really want max intelligence.
Lex
(01:50:28)
How do you figure out which problem requires what level of intelligence? Is that possible to dynamically figure out when to use GPT-4, when to use a small model and when you need the o1?
Aman
(01:50:44)
Yeah, that’s an open research problem, certainly. I don’t think anyone’s actually cracked this model routing problem quite well. We have initial implementations of this for something like Cursor Tab, but at the level of going between 4o sonnet to o1, it’s a bit trickier. There’s also a question like, what level of intelligence do you need to determine if the thing is too hard for the four level model? Maybe you need the o1 level model. It’s really unclear.
Lex
(01:51:19)
But you mentioned this. So there’s a pre-training process then there’s post-training, and then there’s test time compute. Is that fair to separate? Where’s the biggest gains?
Aman
(01:51:31)
Well, it’s weird because test time compute, there’s a whole training strategy needed to get test time compute to work. The other really weird thing about this is outside of the big labs and maybe even just OpenAI, no one really knows how it works. There’ve been some really interesting papers that show hints of what they might be doing. So perhaps they’re doing something with tree search using process reward models. But yeah, I think the issue is we don’t quite know exactly what it looks like, so it would be hard to comment on where it fits in. I would put it in post-training, but maybe the compute spent for this kind of… forgetting test time compute to work for a model is going to dwarf pre-training eventually.
Lex
(01:52:18)
So we don’t even know if o1 is using just chain of thought or we don’t know how they’re using any of these? We don’t know anything?
Aman
(01:52:27)
It’s fun to speculate.
Lex
(01:52:30)
If you were to build a competing model, what would you do?
Aman
(01:52:35)
Yeah. So one thing to do would be, I think you probably need to train a process reward model, which is… So maybe we can get into reward models, and outcome reward models versus process reward models. Outcome reward models are the traditional reward models that people train for language modeling, and it’s just looking at the final thing. So if you’re doing some math problem, let’s look at that final thing. You’ve done everything, and let’s assign a grade to it: how likely we think… What’s the reward for this outcome? Process reward models instead try to grade the chain of thought. So OpenAI had a preliminary paper on this, I think, last summer, where they used human labelers to get this pretty large, several-hundred-thousand-example dataset of graded chains of thought. Ultimately, it feels like I haven’t seen anything interesting in the ways that people use process reward models, outside of just using them as a means of affecting how we choose between a bunch of samples.

(01:53:36)
So what people do in all these papers is they sample a bunch of outputs from the language model, and then use the process reward models to grade all those generations alongside maybe some other heuristics and then use that to choose the best answer. The really interesting thing that people think might work and people want to work is tree search with these process reward models. Because if you really can grade every single step of the chain of thought, then you can branch out and explore multiple paths of this chain of thought and then use these process reward models to evaluate how good is this branch that you’re taking.
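A minimal sketch of the best-of-n reranking idea just described, with stubbed scoring functions standing in for real outcome and process reward models (the heuristics inside the scorers are invented for illustration, not from any paper):

```python
# Toy best-of-n reranking. An outcome reward model (ORM) grades only the
# final answer; a process reward model (PRM) grades every step of the
# chain of thought and aggregates (min over steps is a common choice).

def orm_score(solution_steps):
    """ORM stub: grade only the final answer line."""
    return 1.0 if solution_steps[-1] == "answer: 42" else 0.0

def prm_score(solution_steps):
    """PRM stub: grade each step, then take the weakest step's score."""
    step_scores = [0.9 if "because" in s or s.startswith("answer") else 0.2
                   for s in solution_steps]
    return min(step_scores)

def best_of_n(candidates, scorer):
    """Sample n chains of thought, keep the one the scorer likes best."""
    return max(candidates, key=scorer)

candidates = [
    ["guess randomly", "answer: 41"],
    ["x + 2 = 44, so x = 42 because 44 - 2 = 42", "answer: 42"],
]
best = best_of_n(candidates, prm_score)
```

The same per-step scores are what a tree search would consult when deciding which branch of the chain of thought to expand.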
Lex
(01:54:14)
Yeah. When the quality of the branch is somehow strongly correlated with the quality of the outcome at the very end, so you have a good model of knowing which branch to take. So not just in the short term, in the long term?
Aman
(01:54:26)
Yeah. The interesting work that I think has been done is figuring out how to properly train the process… Or the interesting work that has been open sourced, and that people I think talk about, is how to train the process reward models, maybe in a more automated way. I could be wrong here, could be not mentioning some papers. I haven’t seen anything that seems to work super well for using the process reward models creatively to do tree search and code.
Lex
(01:54:52)
This is an AI safety, maybe a bit of a philosophy question. So OpenAI says that they’re hiding the chain of thought from the user, and they’ve said that that was a difficult decision to make. Instead of showing the chain of thought, they’re asking the model to summarize the chain of thought. They’re also in the background saying they’re going to monitor the chain of thought to make sure the model is not trying to manipulate the user, which is a fascinating possibility. But anyway, what do you think about hiding the chain of thought?
Michael
(01:55:21)
One consideration for OpenAI, and this is completely speculative, could be that they want to make it hard for people to distill these capabilities out of their model. It might actually be easier, if you had access to that hidden chain of thought, to replicate the technology, because that’s pretty important data: seeing the steps that the model took to get to the final result.
Lex
(01:55:39)
So you could probably train on that also?
Michael
(01:55:42)
And there was a mirror situation with this, with some of the large language model providers, and also this is speculation, but some of these APIs used to offer easy access to log probabilities for all the tokens that they’re generating, and also log probabilities over the prompt tokens. And then some of these APIs took those away. Again, complete speculation, but one of the thoughts is that the reason those were taken away is, if you have access to log probabilities, similar to this hidden chain of thought, that can give you even more information to try and distill these capabilities out of the APIs, out of these biggest models, and into models you control. As an asterisk to the previous discussion about us integrating o1, I think that we’re still learning how to use this model. So we made o1 available in Cursor because when we got the model, we were really interested in trying it out. I think a lot of programmers are going to be interested in trying it out.

(01:56:40)
o1 is not part of the default Cursor experience in any way, and we still haven’t found a way to integrate it into the editor in a way that we reach for every hour, maybe even every day. So I think that the jury’s still out on how to use the model, and we haven’t seen examples yet of people releasing things where it seems really clear like, oh, that’s now the use case. The obvious one to turn to is maybe this can make it easier for you to have these background things running, to have these models in loops, to have these models be agentic. But we’re still discovering-
Sualeh
(01:57:22)
To be clear, we have ideas. We just need to try and get something incredibly useful before we put it out there.
Aman
(01:57:29)
But it has these significant limitations. Even barring capabilities, it does not stream. That means it’s really, really painful to use for things where you want to supervise the output; instead, you’re just waiting for the wall of text to show up. Also, it does feel like the early innings of test-time compute and search, where it’s very, very much a v0, and there are so many things that don’t feel quite right. I suspect, in parallel to people increasing the amount of pre-training data and the size of the models in pre-training and finding tricks there, you’ll now have this other thread of getting search to work better and better.
Lex
(01:58:13)
So let me ask you about strawberry tomorrow eyes. So it looks like GitHub Copilot might be integrating o1 in some kind of way, and I think some of the comments are saying, does this mean Cursor is done? I think I saw one comment saying that.
Arvid
(01:58:35)
It’s a time to shut down Cursor. Yeah.
Lex
(01:58:36)
Time to shut down Cursor.
Arvid
(01:58:38)
[inaudible 01:58:38].
Lex
(01:58:39)
Thank you. So is it time to shut down Cursor?
Michael
(01:58:41)
I think this space is a little bit different from past software spaces over the 2010s, where I think that the ceiling here is really, really, really incredibly high. So I think that the best product in three to four years will just be so much more useful than the best product today. You can wax poetic about moats this and brand that and this is our advantage, but I think in the end, just if you stop innovating on the product, you will lose. That’s also great for startups, that’s great for people trying to enter this market, because it means you have an opportunity to win against people who have lots of users already by just building something better. So I think over the next few years, it’s just about building the best product, building the best system. That both comes down to the modeling engine side of things, and it also comes down to the editing experience.
Aman
(01:59:37)
Yeah, I think most of the additional value from Cursor versus everything else out there is not just integrating the new model fast like o1. It comes from all of the depth that goes into these custom models that you don’t realize are working for you in every facet of the product, as well as the really thoughtful UX with every single feature.

Synthetic data

Lex
(02:00:01)
All right. From that profound answer-
Lex
(02:00:01)
All right, from that profound answer, let’s descend back down to the technical. You mentioned you have a taxonomy of synthetic data.
Aman
(02:00:08)
Oh yeah.
Lex
(02:00:09)
Can you please explain?
Aman
(02:00:10)
Yeah, I think there are three main kinds of synthetic data. So what is synthetic data, first? So there’s normal data, like non-synthetic data, which is just data that’s naturally created, i.e. usually it’ll be from humans having done things. So from some human process you get this data. Synthetic data, the first one would be distillation. So having a language model output tokens or probability distributions over tokens, and then you can train some less capable model on this.

(02:00:45)
This approach is not going to get you a more capable model than the original one that has produced the tokens, but it’s really useful for if there’s some capability you want to elicit from some really expensive high-latency model. You can then distill that down into some smaller task-specific model.

(02:01:04)
The second kind is when one direction of the problem is easier than the reverse. So a great example of this is bug detection, like we mentioned earlier, where it’s a lot easier to introduce reasonable-looking bugs than it is to actually detect them. And this is probably the case for humans too. And so what you can do is get a model that’s not trained on that much data, that’s not that smart, to introduce a bunch of bugs in code. And then you can use that synthetic data to train a model that can be really good at detecting bugs.
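A toy sketch of that easy-direction data generation: the bug "injector" here is just a stubbed operator mutation (my invention for illustration, not how any lab actually does it), but it shows how cheap generation yields labeled pairs for training a detector:

```python
# Injecting a bug is the cheap direction; detecting it is the hard skill
# we want training data for. Stub mutation: swap a comparison operator.

def inject_bug(code):
    """Cheaply mutate clean code into a plausible buggy variant."""
    if "<=" in code:
        return code.replace("<=", "<", 1)
    if "<" in code:
        return code.replace("<", "<=", 1)
    return code

def make_dataset(snippets):
    """Label 1 = buggy, 0 = clean; a detector model would train on these."""
    data = []
    for snip in snippets:
        data.append((inject_bug(snip), 1))
        data.append((snip, 0))
    return data

snippets = ["for i in range(n):\n    if a[i] < best:\n        best = a[i]"]
dataset = make_dataset(snippets)
```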

(02:01:42)
The last category I think is, I guess, the main one that it feels like the big labs are doing for synthetic data, which is producing text with language models that can then be verified easily. So an extreme example of this is, if you have a verification system that can detect if language is Shakespeare-level, and then you have a bunch of monkeys typing on typewriters, you can eventually get enough training data to train a Shakespeare-level language model.

(02:02:12)
And I mean this is very much the case for math where verification is actually really, really easy for formal languages. And then what you can do, is you can have an okay model, generate a ton of rollouts, and then choose the ones that you know have actually proved the ground truth theorems, and train that further.

(02:02:34)
There are similar things you can do for code with LeetCode-like problems, where if you have some set of tests where you know that passing them means the problem is actually solved, you can do the same thing: verify that it’s passed the tests, and then train the model on the outputs that have passed the tests.
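The generate-verify-keep loop for code can be sketched in a few lines; candidate solutions here are stand-in lambdas rather than real model samples:

```python
# Rejection sampling with a test-based verifier: sample many candidate
# solutions, run the tests, and keep only the verified rollouts as
# synthetic training data.

def run_tests(candidate_fn, tests):
    """Verifier: a candidate counts as correct only if every test passes."""
    try:
        return all(candidate_fn(x) == y for x, y in tests)
    except Exception:
        return False

def filter_rollouts(candidates, tests):
    """Keep only the rollouts that pass the full test suite."""
    return [fn for fn in candidates if run_tests(fn, tests)]

# Target behavior: square the input.
tests = [(2, 4), (3, 9), (10, 100)]
candidates = [
    lambda x: x * 2,   # wrong rollout (fails on 3)
    lambda x: x ** 2,  # correct rollout
    lambda x: x + x,   # wrong rollout
]
verified = filter_rollouts(candidates, tests)
```

In the math case the test suite is replaced by a formal proof checker, but the filtering step is the same.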

(02:02:51)
I think it’s going to be a little tricky getting this to work in all domains, or just in general. Having the perfect verifier feels really, really hard to do for just open-ended miscellaneous tasks you give the model, or for more long-horizon tasks, even in coding.
Lex
(02:03:09)
That’s because you’re not as optimistic as Arvid. But yeah, so yeah, that third category requires having a verifier.
Aman
(02:03:17)
Verification, it feels like it’s best when you know for a fact that it’s correct. And then it wouldn’t be like using a language model to verify. It would be using tests or formal systems.
Michael
(02:03:28)
Or running the thing too. Doing the human form of verification, where you just do manual quality control.
Aman
(02:03:34)
Yeah.
Michael
(02:03:34)
But the language model version of that, where it’s running the thing and it actually understands the output.
Aman
(02:03:39)
Yeah. No, that’s-
Michael
(02:03:41)
I’m sure it’s somewhere in between.
Aman
(02:03:41)
Yeah. I think that’s the category that is most likely to result in massive gains.

RLHF vs RLAIF

Lex
(02:03:48)
What about the RL-with-feedback side, RLHF versus RLAIF? What’s the role of that in getting better performance on the models?
Aman
(02:04:00)
Yeah. So RLHF is when the reward model you use is trained from some labels you’ve collected from humans giving feedback. I think this works if you have the ability to get a ton of human feedback for this kind of task that you care about.

(02:04:21)
RLAIF is interesting because you’re depending on… This is actually, it’s depending on the constraint that verification is actually a decent bit easier than generation. Because it feels like, okay, what are you doing? Are you using this language model to look at the language model outputs and then improve the language model? But no, it actually may work if the language model has a much easier time verifying some solution than it does generating it. Then you actually could perhaps get this kind of recursive loop. But I don’t think it’s going to look exactly like that.

(02:04:56)
The other thing you could do, that we kind of do, is a little bit of a mix of RLAIF and RLHF, where usually the model is actually quite correct, and this is the case with Cursor Tab, at picking between two possible generations of what is the better one. And then it just needs a little bit of human nudging, with only on the order of 50-100 examples, to align the prior the model has with exactly what you want.

(02:05:29)
It looks different than, I think, normal RLHF, where you’re usually training these reward models on tons of examples.
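The pairwise-preference setup described above is usually trained with a Bradley-Terry-style loss; here is a minimal sketch with stubbed scalar scores standing in for reward model outputs (the numbers are invented for illustration):

```python
import math

# Given two generations, a reward model scores each; the Bradley-Terry
# loss -log sigmoid(r_preferred - r_rejected) is small when the preferred
# generation already scores higher, large when the preference is inverted.

def bradley_terry_loss(score_preferred, score_rejected):
    """Negative log-likelihood that the preferred sample wins."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With ~50-100 labeled pairs, training nudges scores so this loss falls.
aligned = bradley_terry_loss(2.0, -1.0)   # preferred clearly wins: low loss
confused = bradley_terry_loss(-1.0, 2.0)  # preference inverted: high loss
```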

Fields Medal for AI

Lex
(02:05:35)
What’s your intuition when you compare generation and verification or generation and ranking? Is ranking way easier than generation?
Aman
(02:05:45)
My intuition would just say, yeah, it should be. This is going back to… Like, if you believe P does not equal NP, then there’s this massive class of problems that are much, much easier to verify given a proof than actually proving.
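The verification-generation asymmetry can be made concrete with a classic NP problem, subset sum: checking a proposed certificate is one cheap pass, while finding one is brute-force search over subsets.

```python
from itertools import combinations

# Subset sum: verification is linear in the certificate size, while naive
# generation enumerates exponentially many subsets.

def verify(nums, target, certificate):
    """Cheap check: the proposed subset uses allowed numbers and hits target."""
    return all(c in nums for c in certificate) and sum(certificate) == target

def generate(nums, target):
    """Expensive search: try every subset until one sums to target."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = generate(nums, 9)
```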
Lex
(02:06:03)
I wonder if the same thing will prove P not equal to NP or P equal to NP.
Arvid
(02:06:10)
That would be really cool.
Lex
(02:06:11)
That’d be, whatever, a Fields Medal by AI. Who gets the credit? Another open philosophical question.
Michael
(02:06:19)
Whoever prompted it.
Sualeh
(02:06:24)
I’m actually surprisingly curious what a good bet for when AI will get the Fields Medal will be. I actually don’t have-
Michael
(02:06:31)
Isn’t this Aman’s specialty?
Sualeh
(02:06:33)
I don’t know what Aman’s bet here is.
Lex
(02:06:35)
Oh, sorry, Nobel Prize or Fields Medal first?
Sualeh
(02:06:37)
Fields Medal-
Aman
(02:06:38)
Oh, Fields Medal level?
Arvid
(02:06:39)
Fields Medal comes first, I think.
Sualeh
(02:06:41)
[inaudible 02:06:41].
Lex
(02:06:41)
Fields Medal comes first. Well, you would say that, of course.
Arvid
(02:06:44)
But it’s also this isolated system you verify and…
Lex
(02:06:47)
Sure.
Sualeh
(02:06:48)
I don’t even know if I-
Arvid
(02:06:49)
You don’t need to do [inaudible 02:06:50].
Aman
(02:06:50)
I feel like I have much more to go on there. It felt like the path to get to IMO was a little bit more clear, because it already could get a few IMO problems, and there was a bunch of low-hanging fruit, given the literature at the time, of what tactics people could take. I think I’m, one, much less versed in the space of theorem proving now. And, two, I have less intuition about how close we are to solving these really, really hard open problems.
Lex
(02:07:15)
So you think it’ll be Fields Medal first? It won’t be in physics or in-
Sualeh
(02:07:20)
Oh, 100%. I think that’s probably more likely. It is probably much more likely that it’ll get it. Yeah, yeah, yeah. Well, I think… I don’t know, BSD, which is the Birch and Swinnerton-Dyer conjecture, or [inaudible 02:07:33], or any one of these hard math problems, are just actually really hard. It’s sort of unclear what the path to get even a solution looks like. We don’t even know what a path looks like, let alone [inaudible 02:07:47].
Arvid
(02:07:47)
And you don’t buy the idea this is just like an isolated system and you can actually have a good reward system, and it feels like it’s easier to train for that.
Aman
(02:07:56)
I think we might get the Fields Medal before AGI.
Sualeh
(02:07:59)
I mean, I’d be very happy. I’d be very happy. But I don’t know if I… I think 2028, 2030.
Lex
(02:07:59)
For Fields Medal?
Sualeh
(02:08:09)
Fields Medal.
Lex
(02:08:11)
All right. It feels like forever from now, given how fast things have been going.

Scaling laws


(02:08:17)
Speaking of how fast things have been going, let’s talk about scaling laws. So for people who don’t know, maybe it’s good to talk about this whole idea of scaling laws. What are they, where do things stand, and where do you think things are going?
Aman
(02:08:34)
I think it was interesting. The original scaling laws paper by OpenAI was slightly wrong, because of some issues they had with learning rate schedules. And then Chinchilla showed a more correct version. And then from then, people have again deviated from doing the compute-optimal thing, because people now optimize more for making the thing work really well given an inference budget.

(02:08:59)
And I think there are a lot more dimensions to these curves than what we originally used of just compute, number of parameters, and data. Inference compute is the obvious one. I think context length is another obvious one. So let’s say you care about those two things, inference compute and context window; maybe the thing you want to train is some kind of SSM, because they’re much, much cheaper and faster at super, super long context. And even if it has maybe 10x worse scaling properties during training, meaning you spend 10x more compute to train the thing to get the same level of capabilities, it’s worth it, because you care most about that inference budget for really long context windows. So it’ll be interesting to see how people play with all these dimensions.
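For reference, the compute-optimal recipe mentioned here can be sketched with back-of-envelope numbers. The C ≈ 6ND FLOPs approximation and the roughly 20-tokens-per-parameter ratio are the commonly cited rules of thumb from the Chinchilla line of work, not exact figures from any one lab:

```python
# Chinchilla-style accounting: training compute C ≈ 6 * N * D FLOPs,
# with parameters N and tokens D scaled together at about D = 20 * N.

def train_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

def chinchilla_optimal(compute_budget, tokens_per_param=20):
    """Split a FLOP budget into (params, tokens) under D = 20 * N."""
    n_params = (compute_budget / (6 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

# Chinchilla itself: 70B params on 1.4T tokens (ratio of 20 tokens/param).
n, d = chinchilla_optimal(6 * 70e9 * 1.4e12)
```

Over-training a small model, as with the Llama 7B runs, deliberately pushes D far above this ratio to buy cheaper inference.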
Lex
(02:09:47)
So yeah, I mean you speak to the multiple dimensions, obviously. The original conception was just looking at the variables of the size of the model as measured by parameters, and the size of the data as measured by the number of tokens, and looking at the ratio of the two.
Aman
(02:09:59)
Yeah.
Lex
(02:10:00)
And it’s kind of a compelling notion that there is a number, or at least a minimum. And it seems like one was emerging. Do you still believe that there is a kind of bigger is better?
Aman
(02:10:15)
I mean I think bigger is certainly better for just raw performance.
Sualeh
(02:10:21)
And raw intelligence.
Aman
(02:10:22)
And raw intelligence. I think the path that people might take is… I’m particularly bullish on distillation. And how many knobs can you turn where, if we spend a ton, ton of money on training, we get the most capable cheap model? Really, really caring as much as you can about inference-time compute. Because the naive version of caring as much as you can about inference-time compute is what people have already done with the Llama models, just over-training the shit out of 7B models on way, way, way more tokens than is Chinchilla optimal.

(02:10:54)
But if you really care about it, maybe the thing to do is what Gemma did, which is, let’s not just train on tokens, let’s literally train on minimizing the KL divergence with the distribution of Gemma 27B, right? So knowledge distillation there. And you’re spending the compute of literally running this 27-billion-parameter model on all these tokens, just to get out this, I don’t know, smaller model.
Lex
(02:11:20)
And the distillation gives you just a faster model, smaller means faster.
Aman
(02:11:23)
Yeah. Distillation, in theory, is, I think, getting more signal out of the data that you’re training on. And it’s perhaps another way of getting over, not completely over, but partially helping with the data wall. Where you only have so much data to train on: let’s train this really, really big model on all these tokens, and we’ll distill it into this smaller one. And maybe we can get more signal per token for this much smaller model than we would’ve originally if we trained it.
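The "more signal per token" point can be made concrete: a distillation loss matches the teacher's whole next-token distribution instead of a one-hot label. A toy sketch over a stubbed 3-token vocabulary (the probabilities are invented for illustration, not real model outputs):

```python
import math

# KL(teacher || student) over the vocabulary: the soft teacher target
# carries a full distribution per token, while a hard one-hot label
# carries only the argmax.

def kl_divergence(teacher, student):
    """Sum of p * log(p / q) over vocabulary entries with p > 0."""
    return sum(p * math.log(p / q) for p, q in zip(teacher, student) if p > 0)

teacher = [0.7, 0.2, 0.1]       # soft target: a whole distribution
one_hot = [1.0, 1e-9, 1e-9]     # hard target: essentially argmax only
student = [0.6, 0.25, 0.15]

soft_loss = kl_divergence(teacher, student)
hard_loss = kl_divergence(one_hot, student)
```

The student already being close to the soft target gives a small loss, while the hard target penalizes everything but the top token; gradients from the soft target spread signal across the whole vocabulary.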
Lex
(02:11:51)
So if I gave you $10 trillion, how would you spend it? I mean you can’t buy an island or whatever. How would you allocate it in terms of improving the big model versus maybe paying for HF in the RLHF? Or-
Aman
(02:12:09)
Yeah, yeah. I think there’s a lot of these secrets and details about training these large models that I just don’t know, and are only privy to the large labs. And the issue is, I would waste a lot of that money if I even attempted this, because I wouldn’t know those things. Suspending a lot of disbelief and assuming you had the know-how, or if you’re saying you have to operate with the limited information you have now-
Lex
(02:12:37)
No, no, no. Actually, I would say you swoop in and you get all the information, all the little heuristics, all the little parameters, all the parameters that define how the thing is trained.
Aman
(02:12:49)
Mm-hmm.
Lex
(02:12:50)
If we look at how to invest money for the next five years in terms of maximizing what you called raw intelligence-
Sualeh
(02:12:57)
I mean, isn’t the answer really simple? You just try to get as much compute as possible. At the end of the day, all you need to buy is the GPUs, and then the researchers can find all… You can tune whether you want to pre-train a big model or a small model.
Aman
(02:13:15)
Well this gets into the question of are you really limited by compute and money, or are you limited by these other things?
Sualeh
(02:13:22)
I’m more partial to Arvid’s belief that we’re sort of idea-limited, but there’s always that like-
Arvid
(02:13:27)
But if you have a lot of compute, you can run a lot of experiments.
Lex
(02:13:32)
So you would run a lot of experiments versus use that compute to train a gigantic model?
Arvid
(02:13:38)
I would, but I do believe that we are limited in terms of ideas that we have.
Aman
(02:13:44)
I think yeah, because even with all this compute and all the data you could collect in the world, I think you really are ultimately limited by not even ideas, but just really good engineering. Even with all the capital in the world, would you really be able to assemble… There aren’t that many people in the world who really can make the difference here. And there’s so much work that goes into research that is just pure, really, really hard engineering work. As a very hand-wavy example, if you look at the original Transformer paper, how much work was joining together a lot of these really interesting concepts embedded in the literature, versus then going in and writing all the code, maybe the CUDA kernels, maybe whatever else, I don’t know if it ran on GPUs or TPUs originally, such that it actually saturated the GPU performance? Getting Noam Shazeer to go in and do all this code. And Noam is probably one of the best engineers in the world.

(02:14:43)
Or maybe going a step further, like the next generation of models, having these things… Like getting model parallelism to work, and scaling it on thousands of, or maybe tens of thousands of, V100s, which I think GPT-3 may have been. There’s just so much engineering effort that has to go into all of these things to make it work. If you really brought that cost down to maybe not zero, but just made it 10x easier, made it super easy for someone with really fantastic ideas to immediately get to the version of the new architecture they dreamed up that is getting 40, 50% utilization on their GPUs, I think that would just speed up research by a ton.
Sualeh
(02:15:27)
I mean, I think if you see a clear path to improvement, you should always take the low-hanging fruit first, right? I think probably OpenAI and all the other labs did the right thing to pick off the low-hanging fruit, where the low-hanging fruit is, you could scale up to GPT-4.25 scale, and you just keep scaling, and things keep getting better. There’s no point experimenting with new ideas when everything is working, and you should sort of bang on it and try to get as much juice out of it as possible. And then maybe when you really need new ideas… I think if you’re spending $10 trillion, you probably want to spend some… Then actually reevaluate your ideas a little bit at that point.
Aman
(02:16:15)
I think all of us believe new ideas are probably needed to get all the way there to AGI. And all of us also probably believe there exist ways of testing out those ideas at smaller scales, and being fairly confident that they’ll play out. It’s just quite difficult for the labs in their current position to dedicate their very limited research and engineering talent to exploring all these other ideas, when there’s this core thing that will probably improve performance for some decent amount of time.

The future of programming

Lex
(02:16:56)
But also, these big labs are winning, so they’re just going wild. Okay, so big question, looking out into the future: you’re now at the center of the programming world. How do you think programming, the nature of programming, changes in the next few months, in the next year, in the next two years, and in the next five, 10 years?
Michael
(02:17:20)
I think we’re really excited about a future where the programmer is in the driver’s seat for a long time. And you’ve heard us talk about this a little bit, but one that emphasizes speed and agency for the programmer and control. The ability to modify anything you want to modify, the ability to iterate really fast on what you’re building. And this is a little different, I think, than where some people are jumping to in the space, where I think one idea that’s captivated people, is can you talk to your computer? Can you have it build software for you? As if you’re talking to an engineering department or an engineer over Slack. And can it just be this sort of isolated text box? And part of the reason we’re not excited about that, is some of the stuff we’ve talked about with latency, but then a big piece, a reason we’re not excited about that, is because that comes with giving up a lot of control.

(02:18:19)
It’s much harder to be really specific when you’re talking in the text box. And if you’re necessarily just going to communicate with a thing like you would be communicating with an engineering department, you’re actually abdicating tons of really important decisions to this bot. And this kind of gets at, fundamentally, what engineering is. I think that some people who are a little bit more removed from engineering might think of it as, the spec is completely written out, and then the engineers just come and they just implement. And it’s just about making the thing happen in code and making the thing exist. But I think a lot of the best engineering, the engineering we enjoy, involves tons of tiny micro decisions about what exactly you’re building, and about really hard trade-offs between speed and cost and just all the other things involved in a system. As long as humans are actually the ones designing the software and the ones specifying what they want to be built, and it’s not just a company run by all AIs, we think you’ll really want the human in the driver’s seat dictating these decisions.

(02:19:26)
And so the jury’s still out on what that looks like. I think that one weird idea for what that could look like is that you can control the level of abstraction you view a code base at. And you can point at specific parts of a code base… Like, maybe you digest a code base by looking at it in the form of pseudocode. And you can actually edit that pseudocode too, and then have changes get made down at the formal programming level. And you can gesture at any piece of logic in your software. You keep the in-flow text editing component of programming, you keep the control: you can even go down into the code, you can go up to higher levels of abstraction, while also getting these big productivity gains.
Lex
(02:20:14)
It’d be nice if you can go up and down the abstraction stack.
Michael
(02:20:18)
Yeah. And there are a lot of details to figure out there; it’s sort of a fuzzy idea. Time will tell if it actually works. But these principles of control and speed, and the human in the driver’s seat, we think are really important. We think for some things, like Arvid mentioned before, for some styles of programming, you can hand it off chatbot-style, if you have a bug that’s really well specified. But that’s not most of programming, and that’s also not most of the programming we think a lot of people value.
Lex
(02:20:43)
What about the fundamental skill of programming? There’s a lot of people, like young people right now kind of scared, because they love programming, but they’re scared about, “Will I be able to have a future if I pursue this career path?” Do you think the very skill of programming will change fundamentally?
Michael
(02:21:04)
I actually think this is a really, really exciting time to be building software. We remember what programming was like in 2013, 2012, whatever it was. And there was just so much more cruft and boilerplate and looking up something really gnarly. And that stuff still exists. It’s definitely not at zero. But programming today is way more fun than back then. It’s like we’re really getting down to the delight concentration. And all the things that really draw people to programming, for instance, this element of being able to build things really fast and speed and also individual control, all those are just being turned up a ton.

(02:21:49)
And so I think it’s going to be a really, really fun time for people who build software. I think that the skills will probably change too. I think that people’s taste and creative ideas will be magnified. And it will be maybe less, a little bit, about boilerplate text editing. Maybe even a little bit less about carefulness, which I think is really important today if you’re a programmer. I think it’ll be a lot more fun.
Lex
(02:22:13)
What do you guys think?
Arvid
(02:22:15)
I agree. I’m very excited to be able to change… One thing that happened recently, was we wanted to do a relatively big migration to our code base. We were using AsyncLocalStorage in Node.js, which is known to be not very performant, and we wanted to migrate to a context object. And this is a big migration and affects the entire code base. [inaudible 02:22:38] and I spent, I don’t know, five days working through this, even with today’s AI tools. And I am really excited for a future where I can just show a couple of examples and then the AI applies that to all of the locations. And then it highlights, “Oh, this is a new example, what should I do?” And then I show exactly what to do there. And then that can be done in 10 minutes. And then you can iterate much, much faster. Then you don’t have to think as much upfront and stand at the blackboard and think, “Exactly how are we going to do this, because the cost is so high?” But you can just try something first and you realize, “Oh, this is not actually exactly what I want.” And then you can change it instantly again after. And so yeah, I think being a programmer in the future is going to be a lot of fun.
Aman
(02:23:26)
Yeah, I really like that point about… It feels like a lot of the time with programming, there are two ways you can go about it. One is you think really hard, carefully, upfront about the best possible way to do it, and then you spend your limited time of engineering to actually implement it. But I much prefer just getting in the code and taking a crack at it, seeing how it lays out, and then iterating really quickly on that. That feels more fun.
Lex
(02:23:55)
Yeah, just having it generate the boilerplate is great, so you just focus on the nuanced, difficult design decisions. Migration, I feel like, is a cool one. It seems like larger language models are able to basically translate from one programming language to another, or translate, migrate in the general sense of what migrate is. But that’s in the current moment. So I mean, the fear has to do with, okay, as these models get better and better, then you’re making fewer and fewer creative decisions. And is it going to kind of move to a place where you’re operating in the design space of natural language, where natural language is the main programming language? And I guess I could ask that by way of advice: if somebody’s interested in programming now, what do you think they should learn? You guys started in some Java, and I forget the… Oh, some PHP.
Michael
(02:23:56)
PHP.
Arvid
(02:23:56)
PHP.
Michael
(02:24:53)
Objective-C.
Lex
(02:24:54)
Objective-C. There you go. I mean in the end, we all know JavaScript was going to win, and not TypeScript. It’s going to be like vanilla JavaScript. It’s just going to eat the world, and maybe live with PHP. And I mean it also brings up the question of, I think Don Knuth has this idea that some percent of the population is geeks, and there’s a particular kind of psychology and mind required for programming. And it feels like more and more, the kind of person that can do great programming might expand.
Aman
(02:25:32)
I think different people do programming for different reasons. But I think the true, maybe the best programmers, are the ones that really love, just absolutely love programming. For example, there are folks on our team who literally, when they get back from work, they go and they boot up Cursor, and then they start coding on their side projects for the entire night, and they stay up until 3:00 a.m. doing that. And when they’re sad, they say, “I just really need to code.” And I think it’s that level of obsession and love of programming that really makes the best programmers. And I think these types of people will really get into the details of how things work.
Lex
(02:26:29)
I guess the question I’m asking, that exact programmer, let’s think about that person. When the super tab, the super awesome praise be the tab succeeds, and you keep pressing tab-
Sualeh
(02:26:42)
That person in the team loves cursor tab more than anybody else, right?
Arvid
(02:26:45)
Yeah. And it’s also not just… Pressing tab is just pressing tab. That’s the easy way to say it in the catchphrase. But what you’re actually doing when you’re pressing tab, is that you’re injecting intent all the time while you’re doing it. Sometimes you’re rejecting it, sometimes you’re typing a few more characters. And that’s the way that you’re sort of shaping the things that’s being created. And I think programming will change a lot to just, “What is it that you want to make?”
Sualeh
(02:27:17)
It’s sort of higher bandwidth. The communication to the computer just becomes higher and higher bandwidth, as opposed to just typing, which is much lower bandwidth for communicating intent.
Lex
(02:27:27)
I mean, this goes to your manifesto titled Engineering Genius. “We are an applied research lab building extraordinary productive human AI systems.” So speaking to this hybrid element.

(02:27:41)
“To start, we’re building the engineer of the future, a human AI programmer that’s an order of magnitude more effective than any one engineer. This hybrid engineer will have effortless control over their code base and no low entropy keystrokes. They will iterate at the speed of their judgment, even in the most complex systems. Using a combination of AI and human ingenuity they will outsmart and out engineer the best pure AI systems. We are a group of researchers and engineers.

(02:28:12)
We build software and models to invent at the edge of what’s useful and what’s possible. Our work has already improved the lives of hundreds of thousands of programmers.”

(02:28:21)
And on the way to that, we’ll at least make programming more fun. So thank you for talking today.
Arvid
(02:28:26)
Thank you.
Michael
(02:28:27)
Thanks for having us.
Aman
(02:28:27)
Thank you.
Sualeh
(02:28:28)
Thank you.
Lex
(02:28:29)
Thanks for listening to this conversation with Michael, Sualeh, Arvid and Aman. To support this podcast, please check out our sponsors in the description. And now, let me leave you with a random, funny, and perhaps profound programming quote I saw on Reddit: Nothing is as permanent as a temporary solution that works. Thank you for listening and hope to see you next time.

Transcript for Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America | Lex Fridman Podcast #446

This is a transcript of Lex Fridman Podcast #446 with Ed Barnhart.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Ed Barnhart
(00:00:00)
For the vast majority of human existence, we’ve been nomadic and we’ve done these wider or tighter nomadic circles, depending on the geographic region, but they’d move. So once humans figured out how to stay in a place, that’s the initial trigger to what would become civilization.
Lex Fridman
(00:00:20)
I think you said beauty and blood went hand in hand for the Aztec.
Ed Barnhart
(00:00:24)
What I meant by that is they were absolutely comfortable with human sacrifice and ripping people’s hearts out. They had this just grotesque, violent bent, but in the same way, they also absolutely loved flower gardens and poetry and music and dance. The same Aztec king who would order the hearts of 1,000 people extracted also would stand up at dinner parties to recite his own poetry. But they were really just surgical about it. They’d use a thick obsidian knife where they could just break the ribs right along the sternum and then push the sternum down, pull up, and just [inaudible 00:01:11].
Lex Fridman
(00:01:11)
While the person was alive?
Ed Barnhart
(00:01:13)
Yep, while the person was alive.
Lex Fridman
(00:01:17)
The following is a conversation with Ed Barnhart, an archeologist specializing in ancient civilizations of South America, Mesoamerica, and North America. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Ed Barnhart.

Lost civilizations

Lex Fridman
(00:01:39)
Do you think there are lost civilizations in the history of humans on earth which we don’t know anything about?
Ed Barnhart
(00:01:47)
Yes, I do. And in fact, we have found some civilizations that we had no idea about just in my lifetime. I mean, we’ve got Gobekli Tepe and we’ve got the stuff that’s going on in the Amazon, and there’s some other less startling things that we had no idea existed and push our dates back and gave us whole new civilizations we had no idea about. So yeah, it’s happened and I think it’ll happen again.
Lex Fridman
(00:02:17)
Do you think there’s a lost civilization in the Amazon that the Amazon jungle has eaten up or is hiding the evidence of?
Ed Barnhart
(00:02:27)
Yes, I do. And we’re beginning to find it. There are these huge, what we call geoglyphs, these mound groups that are in geometric patterns. I think that the average Joe, when they hear the word civilization, they think of something that looks like Rome. And I don’t think we’re ever going to find anything that looks like Rome in the Amazon. I think a lot of things there, I mean, wherever you are on the planet, you use your natural resources. And in the Amazon, there’s not a whole lot of stone. What stone is there is deep, deep, deep. So a lot of their things were built out of dirt and trees and feathers and textiles.
Lex Fridman
(00:03:10)
But is it possible that all that land that’s not covered by trees is actually hiding stone, for example, some architecture, some things that are just very difficult to find for archeologists.
Ed Barnhart
(00:03:22)
I think at the base of the Andes where the Amazon connects to the Andes, there’s a lot of potential there because that’s where the stone actually starts poking up. When you get down into the basin, stone is meters and meters under the ground except for a stray cliff here and there where the river dug deep. And even then only in the dry season, because that river rises over 100 feet every year.
Lex Fridman
(00:03:51)
Well, that’s one of the things, having visited that area, just interacting with waterfalls and seeing the water, I was humbled by the power of water to shape landscapes and probably erase history in the context that we’re talking about of civilizations. Water can just make everything disappear over a period of centuries and millennia, and so if there’s something existed a very long time ago, thousands of years ago, it’s very possible it was just eaten up by nature.
Ed Barnhart
(00:04:24)
Absolutely. In fact, in my opinion, that’s almost a certainty in a lot of places. The Grand Canyon was dug by water. There’s this wimpy little river in it right now, and you can’t possibly imagine that it dug that, but it did. The power of nature and geology is really magical. And when it comes to ancient civilizations that could be from a long time ago, there’s probably a lot that are just under the ocean, and the wave action has destroyed them, and what it hasn’t destroyed, it buried deep.
Lex Fridman
(00:04:58)
Under the ocean. So you think Atlantis ever existed?
Ed Barnhart
(00:05:03)
I don’t think that Atlantis existed. I do think it was one of Plato’s many parables, putting it in an interesting story as a teaching device in his school. If one did exist, or a shadow of it, my money would be on Akrotiri. Akrotiri is what’s left of a big city that was on the island of Santorini, and when their volcano blew up, it blew up most of the city and shot chunks of it so far that 70 miles away, in Crete, there are chunks of Santorini in their cliffs. So it blasted whatever was there. But what’s left on the side of the crater, Akrotiri, is strangely advanced for its age. And so if there’s anything that’s a model for Atlantis, as Plato explained it, it’s Akrotiri.
Lex Fridman
(00:06:00)
Akrotiri, the ancient Greek city, it says, “The settlement was destroyed in the Theran eruption sometime in the 16th century BCE and buried in volcanic ash, which preserved the remains of the frescoes and many objects and artworks.” So we don’t know how advanced that civilization was.
Ed Barnhart
(00:06:19)
No, but we can walk around the ruins and see that it’s got streets, it’s got plumbing, it’s got little sconces for torches at night. It was a vibrant city with a lot of, especially in terms of hydraulic engineering, it’s very advanced for being 3,500 years old.
Lex Fridman
(00:06:42)
So if you check it out, here’s an image of the excavation. What a project.
Ed Barnhart
(00:06:47)
It’s an amazing place and you can tell that it’s just part of it because it’s pretty close to where the crater begins. So the city itself was probably much larger.
Lex Fridman
(00:06:58)
So in this case, there’s a lot of evidence, but like we said, there could be civilizations that there is very little evidence of because of the natural environment that destroys all the evidence.
Ed Barnhart
(00:07:09)
Right. And I think Akrotiri’s actually a great example of that because here we have the side that did preserve, that looks amazing, but we know there was more of the city that was completely obliterated. It was shot. Chunks of that city are probably in the walls of Crete 70 miles away, and Plato says that it sunk. It was on an island and it sunk. Well, that’s exactly what happened to Akrotiri.
Lex Fridman
(00:07:35)
Do you think this is what Plato was referring to?
Ed Barnhart
(00:07:37)
If it does exist, at least the model of it, I think this is probably what he was talking about.
Lex Fridman
(00:07:44)
And there could be other civilizations of which Plato has never written that we have no record of?
Ed Barnhart
(00:07:49)
Absolutely.
Lex Fridman
(00:07:50)
And it’s humbling to think that entire civilizations with all the dreams, the hope, the technological innovation, the wars, the conflicts, the political tensions, all of that, the social interactions, the hierarchies, all of that, the art can be just destroyed like that and forgotten, completely lost to ancient history.
Ed Barnhart
(00:08:13)
I reflect upon that often as an archeologist. I think about this great country that I live in and love and all the things we’ve achieved, but we’re a baby historically speaking. We’ve been around 200 years. Heck, a lot of the cities I study in Central and South America, they had a run of 800, 1,000 years, and now they’re ruins. But we’re barely getting started in terms of historical civilizations.

Hunter-gatherers

Lex Fridman
(00:08:43)
So humans, homo sapiens evolved, but they didn’t start civilizations right away. There was a long period of time when they did not form these complex societies. So how do we, let’s say, 300,000 years ago in Africa, actually go from there to creating civilizations?
Ed Barnhart
(00:09:04)
I think that a lot of human evolution had to do with the pressures that their environment put upon them. And a lot of things start changing right around 12,000 years ago, and that’s when our last ice age really ended. I think there was a whole lot of things that just pressured them into, especially, finding new ways of subsistence. Here in the Americas, a huge thing that happened was all the megafauna went away. When the climate changed enough, the mammoths died out and the bison died out, and they had to come up with different ways of doing things. We were hunters and gatherers, and we had things we got from hunting, and we got things we got from gathering. And in the Americas, when the things that they were used to hunting went away and they had to make do with rabbits, the gathering started to be a much more important thing.

(00:10:10)
And I think that led to figuring out, “Hey, we could actually grow certain things.” And gardens turned into crops, turned into intensive crops, and then people were allowed to gather in bigger groups and survive in a single area. They didn’t have to roam around anymore and that’s where we get the first sedentary communities, which means they stayed in the same place all year long. For the vast majority of human existence, we’ve been nomadic and we’ve done these wider or tighter nomadic circles depending on the geographic region where they’d know, “Okay, we’ll be in the summer in the mountains because berries and things, and then in the winter we’ll be down here and we’ll hunt,” but they’d move. So once humans figured out how to stay in a place, I think there, that’s the initial trigger to what would become civilization.
Lex Fridman
(00:11:08)
There’s a lot of questions I want to ask here. What do you think is the motivation for societies? Is it the carrot or the stick? So you said, is it when resources run out, when the old way of life is no longer feeding everybody, then you have to figure stuff out? Or is it more the carrot of there’s always this human spirit that wants to explore, that wants to maybe impress the rest of the village or something like this with the new discovery they made in venturing out and coming out with different ideas or technological innovation, let’s call it?
Ed Barnhart
(00:11:42)
Well, I have an explorer’s heart, so I’m biased. I do think that we have an innate desire to see what’s on the horizon and to impress other people with our achievements, things like that. We’re social beings. That’s really the edge that humans have, is our ability to work together. So I think that it’s much more the carrot than the stick. When things get ugly, the stick comes out, but usually the carrot does the job.

First humans in the Americas

Lex Fridman
(00:12:16)
The really interesting story is how the first people came to the Americas. To me, that’s pretty gangster, to go from Asia all the way, potentially during the Ice Age or maybe at the end of the Ice Age or during that whole period, not knowing what the world looks like, going into the unknown. Can you talk to that process? How did the first people come to the Americas?
Ed Barnhart
(00:12:39)
Well, first off, I agree with you, that was pretty gangster. That’s a hard place to live. I listened to some of your podcasts, that guy, Jordan Jonas, he cut the mustard, but I wouldn’t have made it crossing there.
Lex Fridman
(00:12:52)
Well, there you go. The fact that those guys exist, that somebody like Jordan Jonas exists, people that survive and thrive in these harsh conditions, that’s an indication that it’s possible. So when do you think and how did the first people come?
Ed Barnhart
(00:13:11)
The traditional theories are still somewhat valid, or at least on the table, that when that land bridge occurred, that nomadic hunters just followed the game like they always had and the game went across there because there was no barrier, and they followed them across. The thing that has changed is how early that happened. DNA has been a total game changer for archeology. We get all these evolutionary tracks that we could never see before. When I was a young archeologist, I would’ve never dreamed we’d have the information we have now and that information, a lot of it’s coming out of Texas A&M. We see the traditional 12,500 years ago that there was a migration, but now we’re seeing one that’s almost certainly happening closer to 30,000 years ago.

(00:14:08)
And now the thing that seems like madness but might be true is that it could have been as early as 60,000 years ago. A lot of the DNA things are suggesting that the very first migration could have come across as early as 60,000 years ago. And when I was a younger archeologist, it was heresy to go beyond this 12,500. You were a wacko if you said that, but now it’s really very clear that they came over at least by 30,000, and the bridge opened and closed, then opened and closed.
Lex Fridman
(00:14:41)
That’s during the Ice Age?
Ed Barnhart
(00:14:42)
Right.
Lex Fridman
(00:14:45)
I mean, that’s crazy, right? That is crazy.
Ed Barnhart
(00:14:48)
Yeah. I mean, they didn’t roll in and immediately make New York, but there were people. And there were definitely not people here before that, which is fascinating. When the bridge closed, DNA mutated, and so we have specific kinds of haplogroups that are here in the Americas that don’t exist otherwise, and that same haplogroup game has been showing us more and more that people came across Siberia. It’s not Africa. It’s not Western Europe. Those are still, they’ve become fringe theories, but they’re not totally eradicated. DNA is a developing science as well, and I think we all need to keep that in mind, that it’s not like they just cracked the code and now we know all the answers. And sometimes, like in any science, a breakthrough puts us two steps backwards, not forwards. So I think we don’t need to have too much faith in the models that are now being created through DNA, but they are pointing in the direction of everybody came across from Siberia, that all Native American people are of Asiatic descent.
Lex Fridman
(00:16:06)
Do you think it was a gradual process? If it’s like 30,000 to 60,000 years ago, was it just gradual movement of these nomadic tribes as they followed the animals? Or was it like one explorer that pushed the tribe to just go, go, go, go, and maybe over 100 years travel all the way across, into North America, where Canada is now? Big leaps in movement versus gradual movement?
Ed Barnhart
(00:16:44)
I think it was big leaps. Now, this is mostly a guess, I’ll admit, but I think that, much in the way that a lot of our evolutionary models talk about punctuated equilibrium, there are big moments of change and then it settles out into a more slow and steady pattern, and then something big will happen again. I do think that the early people went as far as they could go, and there were certain colonies that just got isolated for thousands of years. One of the fascinating things that DNA is showing us, which actually blood types were showing us way before that, is that the oldest people in the Americas are in South America, the ones that got separated early and didn’t mix their DNA, like the people in the Amazon.

(00:17:41)
Most of those guys have O-blood type and they’re haplogroup D, which is the oldest one that entered the U.S. And what are they doing down there? I do believe they came across the Bering Strait. We have no real evidence to say they came en masse across Oceania. So they made it, probably by boat along the coast, all the way to South America.
Lex Fridman
(00:18:11)
So there’s some kind of cultural engine that drove them to explore. So if you had to bet all your money, it happened like tens of thousands of years ago, but in a very rapid pace. There’s these explorers. They went all the way to South America and there established their more stable existence. And from there, South America, Mesoamerica, North America was gradually expanded into that area?
Ed Barnhart
(00:18:40)
I think the next waves came down and did North America and Central America, and the very first wave made it all the way down to South America and got isolated there.
Lex Fridman
(00:18:50)
Isolated.
Ed Barnhart
(00:18:51)
And then mixed in with the next groups that came.
Lex Fridman
(00:18:55)
That’s fascinating.
Ed Barnhart
(00:18:57)
There’s an interesting correlate in Europe where today everybody feels like Celtic people are from Ireland, but actually Celtic people started in Eastern Europe and it was the entire area. And when Rome swept everything and Rome was now the ruler of the day, it was only that far edge of the Celtic world, Ireland, that they were like, “Ah, we’re not going to mess with those guys on that island. We’ll leave them be.” So now it looks like that’s the heart of Celtic tradition, but actually it’s the fringe.
Lex Fridman
(00:19:38)
So if it is 60,000 years ago, these are really early humans?
Ed Barnhart
(00:19:43)
Yeah. And there were consistent things that have been coming out for decades about very old carbon-14 dates in the Amazon and in the Andes area that everybody just dismissed as, “No, it didn’t get a date of 40,000 years.” But I think we’re going to come back around to start readdressing some of these based on new evidence at hand.
Lex Fridman
(00:20:08)
And that’s the interesting thing. Early humans spread throughout the world and then, like you said, perhaps got isolated, and then civilizations sprang from there, and they all have similar elements even though they were isolated. That’s really interesting. That’s really interesting that there are multiple cradles of civilization, not just one. One good idea, those ideas naturally come up. Those structures naturally come up.
Ed Barnhart
(00:20:38)
And I wonder whether the similarities that all those cradles have, it could be a shared much deeper past that they all have, or it could be a more Star Trek thing where Captain Kirk was always talking about the theory of parallel human development, that humans across the universe go through certain stages of development and that, that could be the answer to it.
Lex Fridman
(00:21:09)
Which one do you lean on? Which one do you lean towards?
Ed Barnhart
(00:21:12)
I think it’s a case by case thing. I think if we look globally, I’d lean much more towards the human parallel development. But if I look just to the Americas and we have a shorter time period where the things that become major civilizations, now, I’ll say up to 30,000 years ago, which is still a blip in the time of humans, I think that there were shared things that those people came over with from Asia and that, as they got separated, that they had core values that then turned into things like religion and cultural customs that we can see. I’m a big proponent that there are commonalities in all the cultures of the Americas that lead back to and point to a single distant origin.

South America

Lex Fridman
(00:22:07)
You’ve spoken about the lost cradle of civilization, South America. South America is not often talked about as one of the cradles of civilization, South America, Mesoamerica. Can you explain?
Ed Barnhart
(00:22:21)
Well, we have very early stuff in South America. You’re right. Especially as an American, our country’s so big and we are so far removed from these places, we don’t even think about it. But more and more we’re seeing things that predate the earliest stuff that we like to talk about, like Egypt and Mesopotamia. It’s all on the Peruvian coast that we have these cradles of civilization. Someday we might start talking about the Amazon more and more, but right now what we’ve got are things that date back into the 3000s BCE along the coast of Peru. And there are big stone-built pyramids and temples, and they’re amazingly isolated, even now that we’ve found them.

(00:23:18)
Some of them, Caral is one of the most famous ones just north of Lima, we’ve known about it for a couple of decades now, how old it is. But every time I visit there, it’s like I visited the moon. There’s absolutely nobody there, not for miles. It’s amazing how such a discovery was made, and yet still nobody goes to see it. It’s not easy to get to.
Lex Fridman
(00:23:43)
So you think there’s a bunch of locations like that? Some may not have been discovered in the Peru area.
Ed Barnhart
(00:23:48)
Oh, there are so many. Peru has tons. That desert gets really ugly quick and it buries things completely. There are so many pyramids out there that are still completely untouched. When people hear the name pyramids, they think of Egypt immediately. Egypt has got about 140 pyramids, and we have pretty much found them all. Peru has thousands, thousands of pyramids, and not all of them were built of stone. Some of them were Adobe bricks, which have weathered terribly, so now they’re not exciting places to visit today. What’s funny too, we started off talking about whether I think there’s a lost civilization out there, there are definitely things that are still to be discovered, but there are some things that were discovered 100 years ago and archeologists, or back then they call themselves antiquarians, just passed over.

(00:24:50)
Caral was one of these sites because the coast of Peru has, some of those pyramids that were made by the Moche are full of gold and beautiful ceramics, things that you can sell for big money. But Caral was found a long time ago, but the archeologist was like, “God, no gold, no ceramics. Forget about it. This place is no good. We can’t sell anything here.” And then about the 1970s or ’80s, somebody said, “Hey, no ceramics. Is that older than the invention of ceramics? Shit, we better go take another look at that place.”
Lex Fridman
(00:25:30)
So what’s the dating on Caral?
Ed Barnhart
(00:25:32)
Caral, I think, starts at about 3200 BCE, and it lasts as a major civilization with a lot of other cities around it until about 1800 BCE.
Lex Fridman
(00:25:47)
So what’s the story behind looking at some of these images? What’s the story about constructions like that? What was the idea of that thing? Isn’t that amazing?
Ed Barnhart
(00:25:57)
Gosh, it should be some sort of, I’ll be a flaky archeologist like, “This is a place where rituals took place.”
Lex Fridman
(00:26:06)
It could mean a million [inaudible 00:26:09].
Ed Barnhart
(00:26:08)
So many things we say are so just painfully vague, and that’s about what we got. A place like this, I know the one we’re looking at here, I’ve been here a couple of times, in the pyramid behind it, the rubble’s built in a way where the building won’t rock apart. This is a very earthquake-prone place, but the buildings haven’t fallen because they make these net baskets of rocks inside that all wiggle around and don’t allow the building to fall down. And inside these, we’ve also found a couple of things that were babies, that were human babies that were buried in there. There’s a lot of people that see that and go, “Oh, look at that. They were sacrificing babies, these monsters.”

(00:26:56)
I think a lot of the things that are interpreted as baby sacrifices, Caral’s evidence being one of them, I think it’s more about the tragic nature of infant mortality. In the past, it was a lot more common. There were cultures that didn’t even really properly name their kid until they got to five, because chances were they were going to die. And so I think a lot of these babies that we find in these ceremonial contexts that are interpreted as sacrifices, I think they’re putting them in special places because they mourn the death of their kids, and it just happened a lot more frequently then.

Pyramids

Lex Fridman
(00:27:36)
One of the things you said that really surprised me is that pyramids were built in Peru possibly hundreds of years before they were built in Egypt. Is that true?
Ed Barnhart
(00:27:47)
Absolutely. Absolutely.
Lex Fridman
(00:27:47)
That’s crazy.
Ed Barnhart
(00:27:48)
In fact, there’s one that’s now pushing 6000 BCE. That’s thousands of years before the stuff in Egypt. And that one’s called Huaca Prieta. And it was not an Egyptian pyramid, but it was a pyramid and it was thousands of years before.
Lex Fridman
(00:28:13)
What do you think is the motivation to build a pyramid? The fact that it can withstand the elements structurally, that kind of thing? Why do humans build pyramids and why do they build it in all kinds of different locations in the world?
Ed Barnhart
(00:28:31)
Well, my rude answer is pretty boring, really. A lot of people ask me, “Why are there pyramids all over the planet? Is that a coincidence?” I think that when people wanted to build a big building without rebar or cement, you end up building something with a fat base that goes up to a skinny top, and that turns into a pyramid. Any kid who’s playing with blocks on the floor builds a couple towers and his brother knocks them down, and if he wants one that’s going to stay and be tall, he ends up making something with a fat base and a tiny top. And I think that building something big and tall together is one of those human things like, “We built that. That will be here after we’re gone. People will remember who we were.” If there’s any human commonality, it’s fear of our own deaths and that we were nothing and no one will ever remember us. I think that the first big monuments like that were probably a group of people saying, “We’re going to do something that people will remember forever.”

(00:29:42)
Now, that being said, remember we were just talking about Huaca Prieta and this one that’s almost 6000 BC now, is the first one, that one’s a funny case. We just talked about all these lofty goals, but actually I’m pretty sure that Huaca Prieta’s first pyramid was about capping a smelly pile of trash. I think everybody piled up their trash in the middle of town and it stunk. It’s on the coast. It stunk like fish. And somebody said, “If we just bury this thing with dirt, it won’t smell anymore.” And then it was a big mound where people could get up and talk to everybody and then said, “Well, it’s squishy. If we cap it with clay, then it will really not smell.” I really think that the very first pyramids in Peru were about trash management. Talk about deflating, huh?
Lex Fridman
(00:30:38)
Yeah. But then they probably saw it and they were impressed and humbled by the enormity of the construction, and they were like, maybe the next guy thought, “Maybe we should keep building these kinds of things.”
Ed Barnhart
(00:30:50)
Yeah. Not to jump ahead, but in North America, where they also made pyramids, there’s this interesting evolution where there were these piles of shells along rivers and along the coastlines. People ate a lot of shells. That was an easy thing to collect and eat. So these piles of shells would be near communities, and they probably became landmarks, but eventually they started burying their dead inside those too. Probably, again, about stink and about, “Well, we don’t want the dogs to eat them. Maybe we’ll put them in the middle of the shell pile.” But then that all of a sudden became this, “That’s where my grandfather’s body is. That’s where great-grandfather’s body is.” And all of a sudden people started being attached to place, not just for the resources, but for the shared memories of their ancestors. So when the very first pyramid was built in the Ohio area by the Adena people, it was built out of dirt, but it’s full of bodies. And I think it’s an echo of an old thing where they used to be putting bodies in shell mounds.
Lex Fridman
(00:32:00)
So where and who were the first civilizations in South America, Mesoamerica?
Ed Barnhart
(00:32:09)
Well, I think we’re still piecing that together. Coming back to the first things we talked about, I think we’re still missing a lot of stuff, especially in South America. It just keeps getting older and older. Part of the reason it’s hard to answer that question is, at what point do we consider people a civilization or a culture? We have in the Americas this long period of time that we call the Paleo-Indian time where they were hunting megafauna. And then when those went away, we get into this even longer period of time called The Archaic, where they’re just hunters and gatherers. Sometimes somebody’s coming up with a cool different kind of arrowhead. They go back and forth with different hunting tools, but really nothing changes for thousands of years and then finally they start developing into these larger groups, which for the most part has to do with agriculture.

(00:33:08)
It used to be, in archeology, the end-all, be-all: civilization starts with the invention of agriculture, and we can't have sedentary communities until people learn how to farm. But that's been discounted, and Peru was a big part of that. That area of Caral is connected to another city on the coast called Aspero. Aspero starts about the same time, but they're all about fishing. They have no farming. And Caral, which is upriver from them, is farming, but funny enough, they're not really farming food. They're farming cotton, and they're making nets and trading the nets with the people on the coast for fish. So it's not as simple as "it's just agriculture" anymore. But it is, I think, still rooted in: how can we feed more people than just our family? How can we together create a food abundance so we're no longer scared about running out of food?
Lex Fridman
(00:34:09)
So is it possible, which is something you’ve argued, that civilization started in the Amazon, in the jungle versus the coast?
Ed Barnhart
(00:34:18)
I do think so. I think religion in South America began in the Amazon. I think there were people there, very old. Actually, the earliest pottery in all of the Americas, all these places that we have civilizations that grew up, you know where the oldest pottery is? The middle of the Amazon.

Religion

Lex Fridman
(00:34:40)
So there’s interesting cultures developing in the Amazon. So religion, you would say, preceded civilization?
Ed Barnhart
(00:34:47)
In South America, Caral and Aspero that I was just talking about, it’s weird what a dearth of art and any evidence of religion we have. We have those pyramids and things that we call temples, but we don’t really know what went on in there. And there’s no…
Ed Barnhart
(00:35:00)
… Things that we call temples, but we don't really know what went on in there, and there's no hints of religious iconography, ceremonies, nothing like that. The first stuff that we get is right when that culture ends, about 1800 BCE. This culture called Chavín starts up, and their main temple is up in the Andes at a path-of-least-resistance point between the Amazon and the coast. It's about three days' walk either way from this place where the temple is. That's where we start seeing the very first religious iconography, and it's all over the temples. There are things that are definitely from the coast, but the iconography is all jaguars and snakes and crocodiles, and those don't come from the coast. All of those things are coming out of the Amazon.
Lex Fridman
(00:35:59)
Religion is a really powerful idea. Religions are among the most powerful ideas. They're the strongest myths that tie people together. And to you, it's possible that this powerful idea in South America started in the Amazon.
Ed Barnhart
(00:36:16)
I do. I do think it did, and you’re right, ideas are more powerful than weapons, but archeology can’t see them at all. Sometimes we can see ideas manifesting in the things they create and lead to, but there’s an interpretation problem. Are we right about what idea created this? Those are things that archeology just can’t get at.
Lex Fridman
(00:36:43)
That’s one of the challenges of archeology and looking into ancient histories. You’re trying to not just understand what they were doing in terms of architecture, but understand what was going on inside their mind.
Ed Barnhart
(00:36:55)
That's really what I'm in it for, trying to understand these people, and it's real detective work, and we know we're dealing with a totally flawed record. We only have what could stand the test of time. If we look around this room here, if 2,000 years of weathering happened in this room, what would be left, and what would we think happened here?
Lex Fridman
(00:37:21)
Right, right, but not in this room, but if you look at thousands of rooms like it, maybe you can start to piece things together about the different ideologies that ruled the world, the religion, the different ideas. Tell me about this fanged deity. One of your more controversial ideas is that there's a thread that connects the different civilizations, the societies of the Andean region, and that the religion they practiced is more monotheistic than is currently believed in the mainstream.
Ed Barnhart
(00:38:04)
That is exactly what I think, and I think it’s all about this fanged deity who somewhere, thousands of years ago, crawled his way out of the Amazon up into the Andes and a religion took hold. That could have been a combination of ideas from the coast and the Amazon. But he is the one creator deity, in my opinion, through all of these cultures. And the people in the Amazon still talk about him there. His name is Viho Masse in some groups, but they say that his emissaries on earth are the jaguars and that he is the creator deity.
Lex Fridman
(00:38:41)
Why is the current mainstream belief that a lot of the religions are not monotheistic?
Ed Barnhart
(00:38:46)
Well, there are bona fide pantheons. Greece had one, Egypt had one, Mesopotamia had one. Lots of the early religions of the old world were pantheons, and I think that was part of the problem. The earliest archeologists walked in there with a preconceived notion that ancient cultures have pantheons. And so they went to the art looking for them, and they came up with things like the shark god and the moon goddess and the sun god, and all these things. But when I look at the art, and I was trained as an art historian by a person right here in Austin, Texas, you follow certain diagnostic traits through art to see the development over time. And when I look at it and use that methodology, there's a single face with goggle eyes and fangs and claws on his hands and feet and snakes coming off of his head and off of his belt. He's got really identifiable traits.

(00:39:50)
He also likes to sever people's heads and carry them around, but he's the fanged deity and he's there. He shows up at Chavín de Huántar, the capital of that Chavín culture, and he keeps showing up through every culture, even thousands of miles away, throughout the next two millennia, right up to the Inca. The Inca have a creator deity they call Viracocha, but Viracocha is the fanged deity. By the time you get to the Inca, they do this almost Islamic thing where they say you can't understand the face of Viracocha. So when they do put him in a cosmogram, they'll make him just a blob, like he's just unknowable, but he's at the very top. I think we're misunderstanding a lot of things that we used to say were deities as just supernatural beings.

(00:40:52)
If we flip the mirror on Christianity and take a look at it (of course, Christianity is monotheistic, right? It would be heresy to say otherwise), who are all these other characters? Who are all these angels and demons and Jesus Christ? I don't even know who the Holy Spirit is, but he's some sort of supernatural being. That monotheistic system has lots of things that have supernatural powers that are not God. That's where I think the crux of our misunderstanding of ancient Andean art is.
Lex Fridman
(00:41:29)
So what is the process of analyzing art through time to try to figure out what the important entities are for that culture? Do you just see what shows up over and over and over and over?
Ed Barnhart
(00:41:41)
Well, certainly without the advent of writing, depictions in art have all sorts of meanings encoded in them, and there are certain what we call diagnostic elements. We can pull apart the same sort of thing as in the Greek pantheon: you know by their dress and what they're holding what the different gods are. You can tell Hades from Zeus by the different things they're holding, lightning bolts or tridents or whatever. So they all have these diagnostic elements to them. So that's how art history goes about analyzing art over time. Once we can put it in a chronological sequence, then we can say, "Okay, here's a deity here in Chavín culture." Now we move forward 500 years. Now we're in Moche and Nazca culture. Where are the deities here? And what I see is that same guy with not just one or two traits, but a whole package of them that shows up again and again and again for thousands of years in each one of these cultures.

(00:42:57)
He's got circular eyes, he's got a fanged mouth, he's got claws on his hands and feet. He's a humanoid, but he also has snakes coming off of his head like hair and snakes coming off of his belt. And then, not so much in Chavín, but as it goes forward, he starts carting around severed heads, human severed heads. In the old literature on the Moche, they'll call him the decapitator deity, but then they have these others: "Oh, here's the crab deity, and here's the fox deity." But if you look at them, the crab deity is just that guy's face coming off of a crab, and the fox deity is that guy's face coming off of a fox.

(00:43:47)
So on that particular point, I explain it similarly to what Zeus did. You know how Zeus was able to turn into whatever animal he wanted to get with the woman he wanted, and he showed up in all sorts of forms, but he was always Zeus. I think the fanged deity manifests himself through people and animals throughout the art, and there are missing stories of mythology that we don't have anymore.
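The trait-tracking method Barnhart describes, following a "package" of diagnostic elements across cultures in chronological sequence, can be illustrated with a minimal sketch. This is not his actual methodology or data; the trait sets below are paraphrased from the traits he names in this conversation, coded as Python sets and compared by overlap:

```python
# Illustrative sketch only: code each culture's principal deity depiction
# as a set of diagnostic traits (traits paraphrased from the conversation,
# not a real art-historical dataset), then measure overlap.

FANGED_DEITY_TRAITS = {
    "circular_eyes", "fanged_mouth", "claws_hands", "claws_feet",
    "snake_hair", "snake_belt", "humanoid",
}

# Hypothetical per-culture codings for the same figure across time.
depictions = {
    "Chavin": {"circular_eyes", "fanged_mouth", "claws_hands",
               "claws_feet", "snake_hair", "snake_belt", "humanoid"},
    "Moche":  {"circular_eyes", "fanged_mouth", "claws_hands",
               "claws_feet", "snake_hair", "snake_belt", "humanoid",
               "severed_heads"},
    "Nazca":  {"circular_eyes", "fanged_mouth", "claws_hands",
               "snake_hair", "humanoid", "severed_heads"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two trait sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

for culture, traits in sorted(depictions.items()):
    print(f"{culture}: {jaccard(FANGED_DEITY_TRAITS, traits):.2f}")
```

A consistently high overlap across cultures separated by centuries is the kind of signal Barnhart points to; the real method, of course, rests on careful chronological sequencing and iconographic judgment, not a single similarity number.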
Lex Fridman
(00:44:16)
And across hundreds of years, thousands of years, from Chavín to Moche to Inca, as you're saying.
Ed Barnhart
(00:44:21)
Right. Wari has them too, Tiahuanaco, that famous place, Pumapunku, he’s all over there.
Lex Fridman
(00:44:29)
I wonder how those ideas spread and morph of this fanged deity?
Ed Barnhart
(00:44:35)
I think people walked and proselytized. Places like Chavín, and a later one in Inca times called Pachacamac, are pilgrimage places where people come in to be healed if they're sick, but also just to pay homage to the powers that be. So Chavín was a place where people from the Amazon and people from the coast were all coming together. In fact, we saw it in the archeology there. There are these interesting labyrinths under the pyramids with the fanged deity all over them. One labyrinth will have all pottery. The next labyrinth will have a bunch of animal bones. The next one will have a bunch of things made out of stone. So people are showing up and giving this tribute and they're learning, and then they're going back to their communities. So I think it dispersed from certain pilgrimage spots, just like pilgrimage spots do. Somebody goes back and they build a temple to the fanged deity.
Lex Fridman
(00:45:36)
Do we know much about the relationship they had with the fanged deity and their conception of his powers? Were they afraid of the fanged deity, an all-knowing god? Is it something that brings joy and harvest, or is it something that you're supposed to be afraid of and sacrifice animals and humans to keep at bay?
Ed Barnhart
(00:46:05)
I think he had two sides of the coin. A lot of the Hindu gods are like that: one aspect is terrible, the other aspect is lovely. I think he had those same sorts of qualities, because we do see him as a fierce warrior taking people's heads off, and he is a jaguar, which in and of itself implies a certain power and ferocity, but then there are other funny things about him. He is definitely involved in a lot of healing ceremonies, and a lot of those healing ceremonies are involved with sex acts. When it comes to the Moche, there's this whole group of sexual pottery where priests are having sex with women or men, and some of them show their faces transforming into that fanged deity; he is acting through them.

(00:46:54)
But the thing that most cracks me up, that shows his softer side, is that the fanged deity has a little puppy. He has a puppy that's just dancing around his feet and jumping up on him in various scenes. We see him again and again. Sometimes he's in these healing sex scenes. In fact, I tracked that puppy from other contexts to these sex scenes where a priest, as the fanged deity, was having sex with somebody in a house, and there's a puppy just scratching at the door like, "Hey, you forgot me." And then finally, one day I found one with the puppy having sex with the woman instead of the fanged deity. I was like, "Oh, he really is very involved in this. What is this weird puppy?"
Lex Fridman
(00:47:40)
Okay.
Ed Barnhart
(00:47:40)
So yeah, he likes to take heads off, but he also has a puppy he adores.

Shamanism

Lex Fridman
(00:47:44)
This awesomely makes sense now, because I saw the opening of a paper you wrote 30 years ago on shamanism and Moche civilization. It reads, "The Moche are the major focus of this paper. Sex, puppies, and headhunting will be shown to be related to ancient Moche shamanism." So now I understand. I was like, "Well, the puppies."
Ed Barnhart
(00:48:07)
Puppies, yeah, it’s true.
Lex Fridman
(00:48:09)
And the headhunting. That’s the decapitator.
Ed Barnhart
(00:48:11)
And I’ve added rock and roll to that list since actually. Rock and roll music is also a big part of it.
Lex Fridman
(00:48:19)
Oh, interesting.
Ed Barnhart
(00:48:20)
They call spirits down. There's this whole spirit world. There's the ancestors, and the people that drink San Pedro cactus juice don't talk about the fanged deity anymore. I think Christianity in 500 years has somewhat put him in the back. It was unpopular to have a pagan deity. So they don't talk about him much anymore, though he's still around. Around Trujillo, they call him Ai Apaec. But music, in the Amazon, they play flute. Sometimes a chorus of women sing, and that's supposed to bring the spirits down into the ceremony. There's a spirit that's hurting the person that's sick, and then the priest or the shaman or the curandero, whatever you want to call him, has his own posse of spirits that are going to help him figure out what's going on.

(00:49:15)
So when the music starts, that's bringing those spirits in, and people don't see them unless they've imbibed the San Pedro cactus juice, which is this hallucinogen. On the Amazon side, it was ayahuasca. On the coast, it was San Pedro cactus, but that's what allows you to actually see that other world.

Ayahuasca

Lex Fridman
(00:49:41)
Yeah. I went to the Amazon recently and did Ayahuasca, a very high dose of it.
Ed Barnhart
(00:49:48)
Bold move.
Lex Fridman
(00:49:52)
When in Rome. How far back does that go?
Ed Barnhart
(00:49:56)
Oh, I think longer than anybody can remember, but it's a natural plant that's been there forever. I think that it's thousands and thousands of years. That's another thing about Chavín de Huántar I was talking about, where I think the religion came from the Amazon. There's this wall on the backside that faces the Amazon side. So if you're entering the city from the Amazon path, you see this wall first, and it's a bunch of faces: some of them are human, some of them are total jaguar, and some of them are transforming in between. But there's a group of them that are midway through transformation, and they show their nostrils leaking out this snot that's coming down their face. San Pedro doesn't do that to you, but ayahuasca does.

(00:50:47)
Ayahuasca traditionally, they’d take a blow gun and just shoot it up your nose or up your ass, but a lot of times up your nose and when it shoots up your nose, the first thing that happens is just this gush of snot comes out of you. And there are stone depictions of people uncontrollably snotting on the backside of this temple from 3,000 years ago.
Lex Fridman
(00:51:12)
So that you think could have been a big component of the development of religion and shamanism?
Ed Barnhart
(00:51:19)
I think that hallucinogens opened the mind then like they opened the mind now.
Lex Fridman
(00:51:25)
Do you think that the stoned ape theory, do you think that actually could have been an actual catalyst for the formation of civilization?
Ed Barnhart
(00:51:36)
In the Americas, yes, I do, though hallucinogens are not part of every ancient tradition in the world. In fact, strangely, the majority of plants that are actually psychotropic, not just mood altering, are from here in the Americas. There are very few drugs that will make you hallucinate outside of the Americas. Of course, now they’re global and they can be grown all over the place. But originally speaking, very, very few were outside of the Americas. So they were part of the experience here in a way that they just couldn’t be in other places.
Lex Fridman
(00:52:17)
I wonder to what degree they were just part of a ritual and the creative force behind art, versus literally the method by which you come up with the ideas that define a civilization. It's the degree to which they had a role in the formation of civilizations. It's fun to think about psychedelics playing a critical role in the formation of civilizations.
Ed Barnhart
(00:52:46)
I think in terms of South America, they probably really were.
Lex Fridman
(00:52:49)
It’s possible.
Ed Barnhart
(00:52:50)
In North America, where we're in a more northern clime here and there are fewer of them, not so much, at least in terms of psychedelics. Things like tobacco were always a big part of it. There's more than one way to reach a hallucinatory state. The hard way is starvation, sleep deprivation. The Maya, for example, would go through sleep deprivation, starvation, and then they'd cut themselves very badly. And that loss of blood, we believe, triggered hallucinations and visions. Nothing to do with drugs. I would much prefer the drugs route.
Lex Fridman
(00:53:31)
So the tools aren't the thing that creates insight. It's the result.
Ed Barnhart
(00:53:42)
Hallucinogens are poisoning us. They’re killing us. It’s a near death state and people of the Americas believed sleeping was entering that other world, death. You entered this other world and that when you took this mighty dose of poison, it was helping you enter that other world for a period of time.
Lex Fridman
(00:54:04)
Yeah, as Tom Waits said in that one song, “I like my town with a little drop of poison.” So maybe that poison is a good catalyst for invention. So who were the early first mother cultures, mother civilizations in South America? If we look chronologically, is there a label we can put on the first peoples that emerged?
Ed Barnhart
(00:54:33)
That picture is evolving. Forever, it was just the Chavín people that we've been talking about, the ones with all the first depictions of religious art, who were the mother culture, and they certainly did transmit a lot of stuff. But then all of a sudden, we find Caral. The next one, which we've barely even begun looking at but is probably older than Caral, is the Sechín culture. I was just poking around there last year, and just from the bus on the highway, I could see, "That's a pyramid out there. Oh, there's another one." And I know how old the stuff we have studied there is. It's, again, 3000 BC. We're just barely beginning to understand them. Caral frustrates me to no end, the lack of art there. We've got stones and bones and not even ceramics to go on, and they didn't have the courtesy to leave me a bunch of art I can interpret. So I don't know what those people believed.
Lex Fridman
(00:55:34)
Right. So one of the ways to understand what people believe is looking at the art, the stories told through the art, and then hopefully deciphering if they were doing any kind of writing.
Ed Barnhart
(00:55:44)
That's our most fruitful place to try to get at these elusive ideas.

Lost City of Z

Lex Fridman
(00:55:50)
And it sucks when they don't have art. If we just go back to the Amazon, you've mentioned that it's possible there's a lost civilization that existed in the Amazon. It's carried a lot of names: the Lost City of Z, El Dorado. Do you think it's possible it existed?
Ed Barnhart
(00:56:07)
Well, the City of Z and El Dorado are in pretty different places. El Dorado, the ideas of where it is center around Colombia.
Lex Fridman
(00:56:17)
Okay.
Ed Barnhart
(00:56:18)
And the City of Z is named after a region of Brazil called the Xingu. And so those are an America worth of distance apart. People don’t really think about it on the map, but the entire United States would fit inside the Amazon. That’s how big that place is. And these two are on either end, but both of them have evidence of civilizations. It’s lowland and it floods all the time. So what they did is they’d make these big mounds and then they’d make huge causeways between mounds so they could walk through their cities while they were seasonally inundated. And a bunch of that stuff has been found in the Xingu area, like huge areas that would support tens of thousands of people.

(00:57:10)
Again, it's not stone-built and it's been under the forest forever, so it's very torn up, but it's there. Brazil is big on cattle farming, more than ever now, and a thing that I think is completed now: Brazil and Bolivia partnered together and built a highway all the way across, which opened up a whole bunch more land and revealed more of these what we call geometric earthworks. So there's more and more evidence of these civilizations. It's not "it could be there." It's there for sure.
Lex Fridman
(00:57:50)
By the way, the people who are trying to protect the rainforest really hate the highway. One of the things I learned is if you build a road, loggers will come-
Ed Barnhart
(00:58:00)
Yep.
Lex Fridman
(00:58:01)
And they will start cutting stuff down. Now, from an archeology perspective, if you cut down trees, you get to discover things. But from a protective, very precious rainforest perspective, it’s obviously the opposite way. But it is interesting, I’ve seen where loggers cut through the forest and when they leave, the forest heals itself very quickly.
Ed Barnhart
(00:58:27)
So quickly.
Lex Fridman
(00:58:28)
And you just think that across decades, you expand that to centuries and you could see how a civilization could be completely swallowed up by the rainforest.
Ed Barnhart
(00:58:41)
And it happened for sure in the Amazon. One of the ways that we’re trying to push the frontier of where people were in the Amazon, because yes, the trees and just the biomass have eaten so much evidence, but they’re finding more and more of these places that they call terra preta, which is black earth, and they’re huge swaths of it. So I guess the anthropology term is anthropogenic landscapes. And what they’re saying is that that really dark earth couldn’t have just got that way through natural forest processes, that sometime in the distant past that forest wasn’t there and there was major farming and human activity to the point where they totally turned the soil black and it’s much more enriched.

(00:59:36)
And when I took a trip into the Amazon, I went from Manaus, up the river, the Black River a couple of days, and went and met some different communities. And I asked them about this black earth, and they were like, “Yeah, that’s why we’re here. Sometimes we move our village, but when we move, we look for the terra preta, and that’s where we’re going to put our village, because that’s a place that all of our gardens work. The other places, they don’t.”
Lex Fridman
(01:00:04)
One of the things you talked about: literally, you just have to ask the right question. And the stories, all the secrets, are carried by the people, and they'll tell you.
Ed Barnhart
(01:00:15)
Yeah, there's so many of them. A thing that excites the world about archeology right now is Göbekli Tepe, and this is 10,000; now Karahan Tepe is 11,000. The whole area is called the Taş Tepeler. We only found it a couple of decades ago, and it was just an archeologist roaming through the area who asked a sheep herder, "Hey, you guys know where anything ancient is?" "Oh yeah, let me show you this." And then all of a sudden we've got a lost civilization. And the shepherds always knew where it was. Just nobody asked them.

Graham Hancock

Lex Fridman
(01:00:49)
So speaking of Göbekli Tepe, what do you think about the work of Graham Hancock, who also believes that there's a lost civilization in the Amazon?
Ed Barnhart
(01:00:59)
Well, I’ve met Graham, and personally I like him. He’s a nice guy, got a nice sense of humor, and I think he’s smart. And I also think he is a very good researcher. He and I are working on the same set of facts. The differences are interpretations. I do not believe Graham’s idea that a single, now lost ancient civilization seeded the rest of them. I just don’t see that on a number of levels, artifact wise, technology wise, art, historical analysis. So I think his research is great. I think that he’s very well-read, in fact, better read than a lot of my colleagues, but his conclusions I disagree with. And he and I have talked about this and had a very civil and normal conversation about it and agree to disagree without spitting any venom at any point in the conversation.
Lex Fridman
(01:02:00)
That would be a fun argument to be a fly on the wall for. So he's proposed that it's possible that the Amazon jungle is a man-made garden, that it was planted there by an advanced ancient civilization. Is there any degree to which that could be possible?
Ed Barnhart
(01:02:21)
Frankly, I agree with him. It's just like what I was just talking about. It's the conclusion part that we differ on.
Lex Fridman
(01:02:28)
Sure.
Ed Barnhart
(01:02:28)
But the facts that he's basing that on are the terra preta, the huge geometric earthworks, the ever-increasing evidence of them. They are now from the bottom of Bolivia to Guyana. They're everywhere. Every time we open up the jungle, we find these big works. So yes, there was a vast civilization that was there. How advanced they were is a question and also a perspective thing. Graham really focuses in on what we don't know and what could be.
Lex Fridman
(01:03:10)
Just to educate me, what's the key idea that he's proposing that you disagree with? Is it the level of advancement of the civilization, or how large and centralized it was?
Ed Barnhart
(01:03:20)
My main point of disagreement is this. His ideas evolve like everybody's; no scientist or researcher in anything has an idea at the beginning of their career and holds it till the day they die. But a core of them remains: that there was a very advanced single ancient civilization that was utterly destroyed by climatic conditions, and the Younger Dryas hypothesis is part of that. He used to not say that; most recently he's into this meteor thing. But he believes that that civilization was destroyed, and that members of it escaped this cataclysm and then spread out all over the world to seed all of the world's civilizations for the next revival.

(01:04:19)
There’s where I disagree with him. I think these were independent civilizations that grew up in their own ways, that they were not seeded by some more advanced civilization from the past, and that they all hold things in common because they have this common ancestry of… In his early books, he suggested it’s Atlantis. I don’t think he suggests that anymore, but he still hangs on to the single advanced, now completely lost civilization. And archeologists, all of our ideas are theories. Very few of them are facts, and we could have the story wrong, but one thing we’re real good at is finding stuff. We find fish scales, so I find it just too big a pill to swallow that there was a civilization that was that technologically advanced and that large that we can’t even find a potsherd from.
Lex Fridman
(01:05:22)
Yeah, and of course, it is a compelling story that there's a single civilization from which all of this came, because the alternative is the idea that we came across the Bering Strait from Asia, went all the way down to South America, got isolated, and created all these marvelous, sophisticated civilizations and ideas, including religious ideas that look similar to others'… Everybody has a flood myth.
Ed Barnhart
(01:05:58)
Right.
Lex Fridman
(01:05:58)
So there's a lot of similarities, everybody building pyramids, but there could be a lot of other explanations. And even if it's a simple, compelling explanation, there has to be evidence for it. And what would that evidence look like?
Ed Barnhart
(01:06:16)
Well, that’s the bottom line.
Lex Fridman
(01:06:17)
That’s tough.
Ed Barnhart
(01:06:18)
Everything is theories. And as responsible scientists, we're trying to disprove our theories. We are not supposed to be trying to prove our theories. That's one more foot out of the science box that archeology often steps. We're supposed to be disproving what we think is happening, not proving it.
Lex Fridman
(01:06:38)
You don’t want to lean into the mystery too much. It’s such a weird discipline because you’re operating in… It’s really in a dark room. You’re feeling around a dark room. So it’s mostly mystery. I would say a lot of sciences operate in a mostly well-lit room. It’s like a dark corner and you’re figuring out a way to light it. But yeah, in archeology, most of it is a mystery. Right?
Ed Barnhart
(01:07:08)
Yes, it's job security. I like that part. But I do also try to always remind myself that every paradigm-shifting idea that humans have ever had began as heresy and lunacy. That guy was crazy up to the second he was brilliant. And so we've got to keep our minds open to the things that sound outlandish, because one of them eventually is going to lead us to the big paradigm shift. And if we are busy burning books of ideas that we don't like, that's where we close our minds to the possibility of advancing things.

Uncontacted tribes

Lex Fridman
(01:07:48)
I really love that, and I really appreciate that you’re saying that. One of the fascinating things about just the Amazon to me is that there’s still a large number of uncontacted tribes. To rewind back into ancient history, you can imagine all of these tribes that existed in the Amazon that were isolated, very distinct from each other. Can you speak to this, your understanding of these tribes and their history that are still here today?
Ed Barnhart
(01:08:20)
Well, a lot of them are these… By uncontacted, we mean we don't know anything about these guys. We know roughly where they are, but places like Ecuador have very responsible policies where no one's allowed to go contact them. So we have a dearth of information. If they walk out of the jungle and talk to us, that's one thing, but we don't go out there looking for them. But they do seem frozen in time, and I don't think any of us have a good estimation of how long they've been like that. But we were saying earlier that humans change based on pressures of their environment. Mother necessity is oftentimes how we invent things or why we change; it's pressure. And one thing about the Amazon is, once you figure out how not to die in it, it's a paradise of food. Food's falling from the sky all the time there, and once you learn to adapt to that environment, you've got very little need. There's no pressure to make anything else. Things are working.
Lex Fridman
(01:09:28)
So for the modern humans that come across these uncontacted tribes, one of the things they document and notice is the propensity of these tribes for violence. So they get very aggressive in attacking whoever they come across.
Ed Barnhart
(01:09:42)
And not just foreigners. They attack each other. The Yanomamo are famous for just having never ending feuds with each other.
Lex Fridman
(01:09:51)
What do you think is the philosophy behind that?
Ed Barnhart
(01:09:57)
I’m a relatively peaceful person, but I’ve got the monster in me like everybody does.
Ed Barnhart
(01:10:01)
I’ve got the monster in me, like everybody does. And I think that these are cultural norms that become institutionalized. For the Yanomamo, part of the rite of passage to be a man is to go kill or maim somebody from another village. And oftentimes, the way they don’t let inbreeding set in and ruin everybody, not that they think of it scientifically, is that they typically go and steal women from far-off communities, and that starts a big fight.

(01:10:40)
Another thing that starts fights, even when nobody was fighting, is illness. Illness in the Amazon and all of the ancient Americas wasn’t seen as a biological thing, it was a spiritual thing. So if somebody in your village gets sick, the question is asked, “Well, what spirit is menacing him and who called it out on him?” And then the rumor starts, “Well, I bet you it was Joe over there in that other community. He’s still pissed off for that time when we stole his daughter, and we ought to go over there and kill Joe, and then he’ll get better.” And so this round of never-ending violence… The Hatfields and McCoys had that thing, and the people of New Guinea also do that. So there are certain areas, mostly wooded areas, now that I think about it, where people just hide out and attack each other as a cultural institution.
Lex Fridman
(01:11:42)
It’s such a tricky thing to do, to study an uncontacted tribe, without obviously contacting them, to figure out their language, their philosophy of mind, how they communicate, the hierarchy they operate under.
Ed Barnhart
(01:11:55)
And yeah, there was a fascinating story in Peru, I guess it was probably like eight years ago or something. There was a ranger from one of the biology stations who, just in the by and by of protecting his area, met one of these uncontacted tribes and befriended some of them. Not the whole tribe, but he made some friends who would meet him in the woods, not in their community. And he started to learn their language over a couple of years. So he was this kind of important guy who could actually be the first translator to talk to these people. And one day, a couple of them just came out of the woods, plugged him with arrows, killed him, and went back into the woods. Like, “That’s the one guy who understands what we’re saying. We should kill him and move our village.”
Lex Fridman
(01:12:42)
So those folks really lean into the, as you said, the monster versus the puppy.
Ed Barnhart
(01:12:48)
You know, everybody’s got it, I think. I think we need to listen to our better angels, because if we don’t, we, as a human species, can easily devolve into just using violence against others to get what we want. It’s a daily choice we make not to be savages.
Lex Fridman
(01:13:12)
Which is a fascinating thing to remember. We’re kind of thinking civilized society, we’ve moved past all that, but it can be summoned, like in 1984, the two minutes of hate. With the right words, that primal thing can be summoned, and directed, and lead to a lot of destruction.
Ed Barnhart
(01:13:38)
And our sports are really based on taking those kinds of urges and channeling them positively, where somebody’s not dead at the end of it.

Maya civilization

Lex Fridman
(01:13:48)
Yep. So at which point did what we now call the Maya Civilization arise?
Ed Barnhart
(01:14:01)
That’s another complicated one, another group living mostly in a jungle that we have barely begun to explore. You know, the truth is, a lot of the answers to the questions in the Amazon, and in what we’re talking about now, are in the Petén and the mountains there. Those aren’t places archeologists want to live; they’re horrible. I mean, I’ve been there. I don’t want to live in a tent and eat rations. I want to live in a nice town. So a lot of the places where the answers are, we still really haven’t gotten to, because it takes a special person to be educated enough to know what they’re looking at, and tough enough to want to be there. I’ve done my tour of duty; I’m now in a nice little podcast studio. But seriously, the Maya, the first hint that we see of people who are culturally Maya, very close to the time period of that Chavín culture, is about 1800 BCE.

(01:14:55)
There’s a culture called the Mokaya, not Maya, but they’re on the Pacific coast, where Guatemala and Mexico connect. It’s called the Soconusco. And those are the first people that are really going to be culturally Maya, and they’re interacting with the culture that has traditionally been seen as Mexico’s mother culture, which is the Olmec. They’re kind of the same thing as we were talking about in South America, where the Maya, the original Maya, there’s not a whole lot to indicate that they have a religion. But the Olmec have this religion they develop, and they start exporting it. And you see the Maya become more and more involved in the religion that’s being created by the Olmec, who are to the north of them, in the swamps of what we call the Isthmus of Tehuantepec.
Lex Fridman
(01:15:49)
I have a lot of questions to ask here about just natural stupid confusion I have. So first, did the Maya or the Olmec come first, and are they distinct groups? How do you maintain a distinct civilization when you’re so close together?
Ed Barnhart
(01:16:10)
I just finished filming a whole thing on the Olmecs and their interaction with the Maya for The Great Courses. I’m thrilled for it to come out next spring. I think they co-evolved. Archeology, in this regard, is its own worst enemy. We put these names on cultures, we talk about how they evolve from one to another, we draw these lines where there aren’t any. We make these time periods where a culture magically transforms into somebody with another name, when I’m pretty sure they didn’t care about any of those names. But the Maya and the Olmec are two parts of a larger interaction sphere that’s happening in Mesoamerica at a very dynamic time.

(01:16:53)
The Olmec are really bringing the religion part, but the other areas are bringing technology, ceramic technology, making hematite mirrors, making tools out of obsidian and other stone types. So you’ve got the Olmec in the middle, where Mexico gets skinny, and it gets swampy down there. That’s called the Isthmus of Tehuantepec. That’s where the Olmec are. Then, you’ve got the Maya to the east of them. Then, you have the Valley of Oaxaca, where the people called the Zapotecs, they’re rising up. And then, you have the Valley of Mexico, which will eventually become the Aztecs, but not for millennia. All those areas are interacting with each other.
Lex Fridman
(01:17:40)
Can we just also draw some more lines?
Ed Barnhart
(01:17:44)
Yeah, sure.
Lex Fridman
(01:17:45)
So what is Mesoamerica and what is South America? And what you just said, the Olmecs and the Maya, can we just linger on the geography that we’re talking about here in the… What is this, like 1000 BC?
Ed Barnhart
(01:18:00)
Yeah, the time period we’re talking about, where the Olmec are there, 1000 BC is a great midpoint of it. I’d say it starts about 1800 BCE, and by 500 BCE, the Olmec are gone, and a whole new wave of civilization and population increase happened. In terms of Mesoamerica, looking at your map here, I’d say about halfway through the Chihuahuan Desert, up there in the top left, that’s about the boundary of Mesoamerica. There’s this big desert where almost nobody lives, and once you get north enough, you get into the ancestral Pueblo people of what’s now America, the Four Corners area. They’re not Mesoamerican, they have different lives.
Lex Fridman
(01:18:50)
Where does modern Mexico end?
Ed Barnhart
(01:18:53)
Modern Mexico ends, right, you see the name Maya there with the white line around it?
Lex Fridman
(01:18:53)
Yeah.
Ed Barnhart
(01:18:57)
That’s Guatemala, so Guatemala cuts off most of Mexico from Central America.
Lex Fridman
(01:19:02)
Got it.
Ed Barnhart
(01:19:03)
But Mesoamerica only goes about halfway through Honduras, and then it’s really kind of a no man’s land. Nicaragua, Costa Rica, Panama, they really, they’re neither. They’re not Mesoamerica, they’re not South America. They’re more South America, because they’ve got some gold there. But then, basically, you get on the other side of Panama, and you’re fully in South America, with two distinct groups, too. You’ve got the guys that are on the Andes, on the west coast, and then you have the Amazon.
Lex Fridman
(01:19:39)
So the Andes and the Amazon are very distinct.
Ed Barnhart
(01:19:42)
Yep.
Lex Fridman
(01:19:42)
So when you refer to the Andean region, is that referring to the Andes and the Amazon, or just the Andes?
Ed Barnhart
(01:19:49)
Just the Andes and the coast to the Pacific there. That’s Andean civilization.
Lex Fridman
(01:19:58)
So did Maya make it to the Andes, the Andean region?
Ed Barnhart
(01:20:02)
Not that archeology can prove, but it’s almost certain that they interacted with each other. Number one, it’s just biased to think that these people couldn’t travel as widely as people on the other side of the planet did. But there are all sorts of hints, like those first ceramics I was talking about that the Maya made; they show up strangely sophisticated technologically, already. And down in Ecuador, they’d had them for 1,000 years before. So a lot of people, myself included, think that the idea of ceramics actually came from South America to the Maya.
Lex Fridman
(01:20:42)
Did the Maya get seeded by the second wave across the Bering Strait, or did that initial wave of people that came and populated South America, were they the ancestors of the Maya? How did the migration happen here? Do we understand that?
Ed Barnhart
(01:21:03)
We’re still piecing it together. You know, I’d be lying if I told you I had the answers. But we do have evidence of Maya-stature people. They were small people. Generally speaking, people who grow up in the forest are smaller and people who grow up in the open plains are taller, probably just from generations of people hitting their head on a branch or not.
Lex Fridman
(01:21:23)
You’re joking, but there could be something to that.
Ed Barnhart
(01:21:29)
I think there’s some truth to it. I mean, the Pygmies are small and the people on the plains in Africa are big. The North American Indians are tall and the Maya are small. There is definitely a pattern of smaller people in the forests. But anyway, there’s a cave in the Yucatan called Loltun Cave that has hand prints in the cave. It’s somebody who put their hand on the cave and spit charcoal around their hand, like a negative print. We can date that charcoal, and it comes from 10,000 years ago, and the hands are all small. It’s typical Old Mexico. I walked right up to these things and could put my hand… I didn’t mess with them, but I put my hand next to these hands, and they’re all smaller than my Northern European hand, and so either it was a bunch of kids who were in this cave 10,000 years ago, or it was people of Maya stature who did it.
Lex Fridman
(01:22:29)
It’s so cool that you can date the charcoal, and it’s so cool that 10,000 years ago there are people leaving [inaudible 01:22:37]-
Ed Barnhart
(01:22:37)
And actually, we have one that’s, I think, 2,000 years older now. Just a couple of years ago, again in Yucatan, in a cave, they found a woman they’ve named Naia, and she’s like 12,000 years old.
Lex Fridman
(01:22:52)
So the best guess maybe that you have is it goes across the Bering Strait down to South America, possibly the Amazon, they developed a lot of cool ideas in the Amazon, and started drifting back up into Mesoamerica?
Ed Barnhart
(01:23:07)
It was kind of a co-evolution. The technology of ceramics, I think, got there through an interaction with-
Lex Fridman
(01:23:15)
See, the interesting thing is that the Maya didn’t really have religion, didn’t have as vibrant a set of religious ideas, and they borrowed it from the Olmec.
Ed Barnhart
(01:23:25)
I’ve been doing a deep dive on this for this Olmec course that I just did, and it really does seem like these other cultures had the jade, and hematite, and obsidian; the Olmec had none of that stuff. They were living in a swamp, building things out of dirt, but they were importing those materials from those areas, carving them into all sorts of religious iconography, and then exporting them back.
Lex Fridman
(01:23:55)
And still, the fanged deity show up [inaudible 01:23:58]-
Ed Barnhart
(01:23:57)
No, the fanged deity is nowhere in Central America and Mesoamerica, that’s why… There’s jaguars, there’s jaguar iconography, but it’s not the same thing. This whole jaguar transformer deity does not exist there. They do have a pantheon.
Lex Fridman
(01:24:15)
So the Maya, the Olmecs are the interesting peoples of the regions. I’d love to ask questions about who were they? So one question I’m curious about, what was their sense when they looked up at the stars? What was their conception of the cosmos?
Ed Barnhart
(01:24:33)
That’s a question I’ve spent my entire career trying to answer. I think that they saw it as proof of the cyclical nature of life, and certainly, they saw, like every ancient group did, like, “Are those the gods? Why are those things far away?” But I think that the Maya especially looked at it with a much more mathematical mind than most did. And so they watched these things move every night, and if you do that even today, you notice that all the stars move in tandem. They’re just this blanket, they’re like this curtain behind me. They’re the stage upon which some very important players are dancing, and that’s the Moon, the Sun and the planets.

(01:25:24)
There are five planets we can see with the naked eye. So they started watching, like, “Why are just those seven moving differently than the rest?” And those are the things that they keyed on mathematically. The Sun, of course, was also involved in the agricultural cycle, so that was important in and of itself. But the planets, we can see them coming up with ideas, definitely doing the math, and seeing that there is a repeated cycle, and then coming up with mythology around them, like Venus for them was associated with war, and they had very ritualized times to go to war that had something to do with Venus.

(01:26:07)
Sometimes, in the classic period Maya, it was the first appearance of Venus as the Morning Star. That was a good time to go to battle with your neighbors. And when it became the post-classic, with Chichén Itzá being the capital of the Yucatan, then it looks like, if you watch Venus day after day, it goes slowly up every day, and then when it hits its highest point as Morning Star in the morning, it goes down to the Earth like three times as fast. All of a sudden, it just shoots down and hits the Earth. And so the people of post-classic Maya civilization saw that as the gods shooting a spear into the Earth, and that was a good time to attack your neighbors. That was like war time, when the spear is going to hit the earth.
Lex Fridman
(01:26:58)
All right, so this is fascinating. They just had at the foundation, a sense that life, existence at the various timescales is cyclical.
Ed Barnhart
(01:27:10)
Yeah.
Lex Fridman
(01:27:11)
That’s a starting point, and then you just look out there, and if you’re extremely precise, which is fascinating, how precise they were, you can just measure the cycles.
Ed Barnhart
(01:27:21)
Yeah, and they did it really well. Now, of course, they are the only ones to develop a fully-elaborated writing system in all of the Americas. South America had the quipu, but it’s so different from our writing. We’re still trying to figure out what the heck it is. We know there’s math there, too. But the Maya had the ability to take a lifetime’s worth of measurements and hand it to the next generation, who would then do it more and do it more.

(01:27:48)
That’s how they figured out kind of the Holy Grail of ancient astronomy. The test of how good they were was whether they could see the precession of the equinoxes, the fact that we’re just barely wobbling, and there’s a 26,000-year period over which the stars, as that backdrop, will spin all the way around and come back. It’s 26,000 years. But the Maya were able to figure out, “Wait, it’s moving one degree every 72 years,” and did calculations based on where it should be in the ancient past, and they were using constellations. They’re showing us they know by saying, “This planet’s in this constellation right now, and 33,000 years ago, it would have been in this constellation.”
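The rate Ed quotes is easy to check with a line of arithmetic: one full wobble spread over 360 degrees works out to about one degree every 72 years. A quick sketch (the 25,772-year figure is the modern estimate of the precession period; the conversation rounds it to 26,000):

```python
# Precession of the equinoxes: the stellar backdrop slides a full
# 360 degrees around over one precession cycle, so the drift rate is:
years_per_cycle = 25_772                    # modern estimate; ~26,000 in the transcript
years_per_degree = years_per_cycle / 360

print(round(years_per_degree, 1))           # ~71.6, i.e. "one degree every 72 years"
```

Either period, 25,772 or 26,000 years, rounds to the same 72-years-per-degree rate Ed attributes to the Maya observations.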
Lex Fridman
(01:28:39)
It’s just fascinating that they were able to figure this out. I would love to sort of understand the details of the scientific community, if you can call it that.
Ed Barnhart
(01:28:50)
I think we absolutely could, and that’s actually one of the things that I’m hoping to move the needle on in my generation, with my career, is to give these cultures the respect they deserve, as standing toe to toe with the rest of our ancient civilizations we respect. There are things that should be called science that are not being called science at the moment. Their math is incredible, their hydraulic engineering is incredible, their chemistry is incredible, and so I hope to talk about these things differently, as a way to get people to recognize the achievements in a different way.

Mayan calendar

Lex Fridman
(01:29:33)
Yeah, I mean, unquestionably incredible scientific work in the astronomy sense, especially here. Can you speak to all the sophisticated aspects of the Mayan calendar that they’ve developed?
Ed Barnhart
(01:29:47)
Don’t know, you got another five hours?
Lex Fridman
(01:29:49)
Let’s go.
Ed Barnhart
(01:29:51)
No, I’m kidding.
Lex Fridman
(01:29:52)
I should say that you also gave me the 2024 Mayan calendar.
Ed Barnhart
(01:29:58)
Yeah, I do this just to show the world that their calendar system is evergreen. It can go into the future or the past for billions of years in the system they made, just like our system can.
Lex Fridman
(01:30:11)
So can you speak to the three components here as I’m reading? The Tzolk’in, the Haab, and the Long Count, what are these fascinating components of the calendar?
Ed Barnhart
(01:30:20)
It’s neat how obsessed… They were really math nerds. It wasn’t good enough for them to just make one cycle to describe time. They had all these cycles that interlocked into each other, like cogs in a machine, though they never thought of it like that. But the Tzolk’in is their oldest one, and the one that still endures today. There are millions of Maya people that are living their lives based on a 260-day count. No weeks, no months. It’s just 13 numbers combined with 20 day names, for a total of 260 days, and then it goes again.

(01:31:01)
Everybody in the highlands knows what their birthday is in that calendar, knows what it means about their personality and the kind of jobs that they’re supposed to do. Each one of those days has their own spirit and what’s supposed to happen in those days. The Maya collectively call them the Mom, the Grandmother, Grandfather spirits, and they talk to each one of those days, and they pray to them. There’s now an association of some 8,000 people that are called [inaudible 01:31:33], that are daykeepers who are keeping the days, and they’re also like community psychologists, almost. People come to them and say, “You know, my life is mixed up. What’s wrong here?” “Well, let’s ask the Mom. Okay, well, it looks like you’re not doing this or that, or you know what, you’re an accountant? You’re not supposed to be an accountant. You’re supposed to be a midwife. What are you doing? You’re living your life wrong. You’re a Kibʼ. You need to start being a Kibʼ person.”
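The interlocking Ed describes, 13 numbers turning against 20 day names, repeats only after 260 days, because 13 and 20 share no common factor. A minimal Python sketch (day zero here is arbitrary, an illustration of the cycle math, not a real correlation to the Maya count):

```python
from math import lcm

def tzolkin(day: int) -> tuple[int, int]:
    """Tzolk'in coordinates for a day counted from an arbitrary day zero:
    a number 1..13 and an index 0..19 into the 20 day names.
    Both wheels advance together, one step per day, like interlocking cogs."""
    return day % 13 + 1, day % 20

# Because 13 and 20 are coprime, every (number, day-name) pair occurs
# exactly once before the combined cycle repeats:
assert lcm(13, 20) == 260
assert tzolkin(0) == tzolkin(260)
assert len({tzolkin(d) for d in range(260)}) == 260
```

This is why there are no weeks or months in the Tzolk’in: the 13-cycle and the 20-cycle together already name each of the 260 days uniquely.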
Lex Fridman
(01:32:02)
So they take extremely seriously the day on which you’re born, what that means, the spirit that embodies that day?
Ed Barnhart
(01:32:08)
Right. Like, I’m Kibʼ, I’m 13 Kibʼ, and it’s funny how accurate a lot of them are. Mine basically says I’m an irresponsible husband and parent, but people like me, so my family still prospers. Like, well, God, that’s horribly accurate.
Lex Fridman
(01:32:29)
I mean, some of it is also the chicken or the egg. If you truly believe, if you’ve structured society where this calendar is truly sacred, then the spirit kind of does manifest itself in the lives of the people born on that spirit’s day.
Ed Barnhart
(01:32:48)
Absolutely.
Lex Fridman
(01:32:49)
It’s interesting.
Ed Barnhart
(01:32:50)
And the Maya really feel this, in this system. So that’s the core system. This 260-day calendar was the very first calendar they made thousands of years ago, and it’s the one that’s most important today.
Lex Fridman
(01:33:02)
Why 260 days, by the way? Is there a reasoning behind it?
Ed Barnhart
(01:33:08)
Most Maya agree with this today, and who knows what the original architects thousands of years ago were thinking, but it’s nine months; it’s the human gestation period. So if you conceived on the day 13 Monkey, chances are your kid’s coming out on or near 13 Monkey, and I think it’s beautiful. I mean, if that’s right, that means when the Maya and the people of Mesoamerica, who all share it together, thought, “We need a count of time for us,” they didn’t look up into the heavens, they looked into their bodies. “What’s the first cycle that we actually go through as humans?” And they picked this nine-month thing. It really is our cycle, and no other culture on the planet looked inside themselves to create their calendar like that.
Lex Fridman
(01:34:05)
So that’s the oldest one and the sacred one that still carries through to today. What’s the second one, the Haab?
Ed Barnhart
(01:34:12)
The Haab is the solar calendar, the one that everybody on the planet eventually comes up with. We know it’s second, though, because when they start talking about it, they use all the symbols and the numbers from the 260-day one. They say, “Well, we need a solar one, too. Let’s just keep counting this another 105 days, and we’ll get to 365.”
Lex Fridman
(01:34:33)
Oh, interesting. They kind of carry the same.
Ed Barnhart
(01:34:35)
Right.
Lex Fridman
(01:34:35)
Got it, got it, got it, got it. And that’s useful, for all the sort of agriculture, all those kind of reasons?
Ed Barnhart
(01:34:42)
Right. Though, interestingly, they never put a leap year in. The Haab is also called the vague year, because it’s just 365 days, which means every year they’re off by a quarter of a day, and eventually it really starts adding up. In fact, it’s even caused modern problems. In this calendar here, I just do the straight math from 1,000 years ago, and so I place the beginning of the solar year differently than some Maya groups do, especially the guys in the highlands of Eastern Guatemala. They write me nasty emails saying, “You don’t know when the year starts,” but their relatives changed it in the 1950s, because their agricultural cycle was so far off. They moved it 60 days back to make it start in the spring again. But it drifts, which is strange, because that’s not a very good thing for an agricultural calendar. It’s one of these mysteries we still don’t have an explanation for.
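The drift Ed mentions is just the missing leap day accumulating: a fixed 365-day year loses about a quarter of a day per year against the seasons, so a 60-day correction like the 1950s one corresponds to roughly two and a half centuries of slippage. Illustrative arithmetic only (the 365.2422-day tropical year is the modern value, not from the conversation):

```python
# A fixed 365-day Haab with no leap day drifts against the
# ~365.2422-day tropical year by about a quarter day per year.
tropical_year = 365.2422
drift_per_year = tropical_year - 365       # ~0.2422 days per year

# How long until the seasons have slipped 60 days against the calendar?
years_to_slip_60 = 60 / drift_per_year
print(round(years_to_slip_60))             # roughly 248 years
```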
Lex Fridman
(01:35:46)
So that’s the Haab, and then what’s the Long Count?
Ed Barnhart
(01:35:49)
The Long Count’s their really mysterious, cool one, because it’s a linear count of days, which is not like them. They’re all about cycles, like ours. You know, our weeks are a cycle, our months are a cycle. But it’s weird in that its estimation of the year in the Long Count system is only 360 days, so it’s miserably off a solar year. They count in base 20. So like we count in 10s, we’re decimal; they count in base 20, vigesimal.

(01:36:25)
And so it should be: there’s 1s, there’s 20s, there’s 400s, there’s 8,000s, there’s 160,000s. It goes just like our 10s, 100s, 1,000s, 10,000s, but it’s times 20. So they have days, months of 20 days, and then they have these years that should be, by their math, 400 days, but are only 360. And that throws the whole thing out of whack going further up. Then they have a 20-year period and a 400-year period: 400 years by their calendar, but by that time it’s only 396 years in our reckoning. So it’s mysterious. Why did they tweak the year to be only 360 days? That doesn’t follow any astronomy, and it’s not the human cycle.
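The place values Ed lays out can be written down directly: a strict base-20 count would go 1, 20, 400, 8,000, but the Long Count swaps in 360 at the third place, and everything above inherits that tweak. A sketch using the standard unit names (kin, winal, tun, k’atun, baktun):

```python
# Long Count place values in days. Strict base 20 would give
# 1, 20, 400, 8000, ...; instead the tun is 360 (18 x 20),
# so every higher place is "off" relative to pure base 20.
KIN, WINAL, TUN, KATUN, BAKTUN = 1, 20, 360, 7_200, 144_000

def long_count_days(baktun, katun, tun, winal, kin):
    """Total days for a Long Count date written baktun.katun.tun.winal.kin."""
    return (baktun * BAKTUN + katun * KATUN + tun * TUN
            + winal * WINAL + kin * KIN)

assert TUN == 18 * WINAL            # the 360-day "year"
assert KATUN == 20 * TUN            # the ~20-year celebration period
assert BAKTUN == 20 * KATUN         # the ~400-year period, 400 tuns
assert long_count_days(13, 0, 0, 0, 0) == 1_872_000
```

That last line is the 13-baktun span of a creation: 1,872,000 days, about 5,125 solar years.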
Lex Fridman
(01:37:23)
Yeah, but it’s interesting that they build up towards thinking about very long periods of time, like a baktun is 144,000 days.
Ed Barnhart
(01:37:34)
Right, a baktun is 400 of the Long Count’s years, so it’s kind of like our millennium. You know, we think it’s a big deal when we hit a millennium or a century. They have a 20-year period that they do a lot of celebrations on, called a k’atun, and then they have the 400-year baktun, which is the big one. That’s like their millennium, and 13 of those baktuns occurred in the creation before us. They also think that the world has had multiple creations. They’re not alone in that; there are lots of ancient civilizations that say that. But we’re technically in the fourth creation.

(01:38:18)
And they have a creation story called the Popol Vuh, and the Popol Vuh is clear as day that the third creation ends with the help of these heroes called the Hero Twins, and the fourth creation begins. And so on the Maya monuments, we see them doing the math through the Long Count, and we can calculate it back very exactly. The fourth creation started on August 11th, 3114 BC. And it doesn’t say it’s day one; it says it’s the last day of the 13th baktun of the third creation, which leads us to believe that a creation is only 13 baktuns long.
Lex Fridman
(01:39:08)
Right, and this would be the fourth creation? The calendar starts-
Ed Barnhart
(01:39:13)
This is the fourth creation. But if you do the math, going from 3114 BC, and count 13 baktuns forward, you get to 2012.
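That math can be checked with day arithmetic. The widely used GMT (Goodman–Martínez–Thompson) correlation places the creation date 0.0.0.0.0 at Julian Day Number 584,283; adding 13 baktuns (1,872,000 days) and converting back to a calendar date lands on December 21, 2012. A sketch assuming that correlation constant (the conversation doesn’t state it), using the standard Fliegel–Van Flandern integer conversion:

```python
def jdn_to_gregorian(jdn: int) -> tuple[int, int, int]:
    """Convert a Julian Day Number to a proleptic Gregorian (year, month, day)
    via the Fliegel-Van Flandern integer algorithm. Years use astronomical
    numbering, so year -3113 is 3114 BC."""
    l = jdn + 68569
    n = 4 * l // 146097
    l -= (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l -= 1461 * i // 4 - 31
    j = 80 * l // 2447
    d = l - 2447 * j // 80
    l = j // 11
    m = j + 2 - 12 * l
    y = 100 * (n - 49) + i + l
    return y, m, d

CREATION_JDN = 584_283     # GMT correlation for 0.0.0.0.0 (assumed here)
BAKTUN_DAYS = 144_000

assert jdn_to_gregorian(CREATION_JDN) == (-3113, 8, 11)                 # August 11, 3114 BC
assert jdn_to_gregorian(CREATION_JDN + 13 * BAKTUN_DAYS) == (2012, 12, 21)
```

Thirteen baktuns forward from the creation date gives exactly the December 21st, 2012 date discussed next.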
Lex Fridman
(01:39:25)
And hence, the very popular notion, the 2012… Whenever that was December, something like-
Ed Barnhart
(01:39:32)
December 21st 2012.
Lex Fridman
(01:39:34)
… will be the end of the world.
Ed Barnhart
(01:39:36)
Right.
Lex Fridman
(01:39:36)
So can you explain this?
Ed Barnhart
(01:39:38)
Those were very fruitful years for me. I had so many lectures around the country. It’s like Garrett Morris on Saturday Night Live: the apocalypse was very, very good to me.
Lex Fridman
(01:39:54)
Ah, yeah, but that is pretty interesting. So technically, it would be, what, in the fifth? No.
Ed Barnhart
(01:40:00)
Yeah, technically we’d be in the fifth, though my argument was that, actually, if you look through the whole corpus of Maya mathematics and calendars, they never say anything like that. In fact, there’s a handful of dates that tell us that the fourth creation does continue farther on, that that baktun place should have 20 baktuns in it, like their counting system would dictate, not 13. There’s a place in Palenque, there’s a place in the Dresden Codex, and one other place I’m forgetting, that all talk about time after 2012. So how does that happen? It’s a conflict.
Lex Fridman
(01:40:49)
Is there supposed to be an overlap of the… So it’s like 13 is the core of it, and it’s 20 long?
Ed Barnhart
(01:40:57)
They love the number 13; it’s all over the place. It’s a magic number to them. My explanation, which I admit is not very solid, is that the magical deeds of the Hero Twins in their creation story, at the end of the third creation, hit the magical reset button, and that it just restarted time right there because of their magic, but that’s not to say that the natural baktun cycle should be 13. And there are certain texts that go way forward in time or way backward in time, and whenever they want to do that, there are higher increments than just the baktun.

(01:41:47)
Above that, there’s the piktun, then there’s the kalabtun, then the alautun, and it goes on and on. And these are like 160,000-year, huge increments of time. Whenever they want to talk about a long period of time, they start putting 13s in all of those higher increments. And I think what they’re saying is an esoteric statement about the never-ending nature of time. That’s what I think they’re telling us in those texts: that time goes on forever, magically.
Lex Fridman
(01:42:24)
But they still had a conception that it didn’t go on forever before, right? That there was other civilizations that came before in there, and this is the fourth creation?
Ed Barnhart
(01:42:35)
This is the fourth creation, and the gods made everybody. The first ones were made of mud, and they melted. The second ones were made of sticks, but they were jerks to the animals. The third ones were like us, but flawed in some other way. And then we’re finally made of the blood of the gods and corn. We’re made out of corn, so we’re perfect. And as the Popol Vuh explains to us, we got it right this time. There’s no reason to believe that this creation has a set duration.

(01:43:18)
One of the weird things is that the Aztecs, who we talked to a lot at contact, also had the concept of multiple creations before us, but they were real clear to the Spanish that the creations weren’t all the same length. Some of them were in the three hundreds of years, some of them were in the seven hundreds of years; they were not the same time period. So our mathematical logic, that if the third creation was 13 baktuns, this one must also be 13, is in direct opposition to what the Aztecs told us about the nature of creations. They’re different time periods.
Lex Fridman
(01:44:01)
Why do you think there was the myth of the previous creations? Did they have some kind of long, multi-generational memory of prior civilizations?
Ed Barnhart
(01:44:13)
It may have had some echo in the flood myths.
Lex Fridman
(01:44:17)
Right, so same? It’s the same kind of major myths carried through long periods of time?
Ed Barnhart
(01:44:23)
There are a lot of different opinions about it. And if we had 5 creations, like the Aztecs said, and they were all 13 baktuns, they would add up to roughly 25,000-something years, which is very close to that precessional cycle. So some people say, “They designed it all to be one completion of the precession of the equinoxes.” I don’t believe that one, but it sure sounds good, doesn’t it?
Lex Fridman
(01:44:53)
Yeah.
Ed Barnhart
(01:44:53)
That’s going to get a lot of internet hits.

Flood myths

Lex Fridman
(01:44:57)
And one of the things I do obviously wonder about is why-
Lex Fridman
(01:45:01)
Wonder about is why the flood myth is part of most societies and most religions.
Ed Barnhart
(01:45:10)
I think that one’s pretty easy. It’s the end of the ice age, when the bathtub filled back up.
Lex Fridman
(01:45:17)
So it’s just the ice age bathtub refilling.
Ed Barnhart
(01:45:19)
It’s the seas filling back up.
Lex Fridman
(01:45:24)
And they, without really understanding what happened, they just carried that story.
Ed Barnhart
(01:45:30)
Everybody knows that everybody’s nice coastal village went under water and they had to seek higher ground.
Lex Fridman
(01:45:38)
And then just like people talking about the weather, everybody was talking about the weather for many generations as the sea level was going up, and then that myth carried.
Ed Barnhart
(01:45:47)
“Why do we live here, grandpa?” “Well, we used to live over there, but then the water came.”
Lex Fridman
(01:45:54)
And then many grandpas later it just kind of permeates every idea.
Ed Barnhart
(01:45:58)
It becomes mythology, but global mythology. So that one, there’s a lot of things I don’t have a reasonable explanation for, but the flood myth is almost certainly the rise in sea level.
Lex Fridman
(01:46:12)
So this idea that every day carries a spirit. There’s modern-day astrology. Most people consider astrology a maybe a bit unscientific, woo-woo set of beliefs, but do you think there’s some wisdom that astrology carries? From your scholarship of the Maya calendar, if we carry that to the astrological perspective on the world, do you think there’s some wisdom there?
Ed Barnhart
(01:46:48)
I don’t know. I have a woo-woo part of me. I would like to believe that stuff. But as a scientist, I cannot come up with a biological, scientific reason why that would be true. And when you look at it objectively, I mean, really? Is everybody born under the sign Scorpio a moody person? That’s just objectively not true.

(01:47:24)
But it is funny how oftentimes these Maya horoscopes, for lack of a better word, do hit the mark. There was a student who surveyed like 300 people with the app I made and asked them about their Greek sign and their Maya sign, and his conclusion for his term paper was that the Maya one was working way better, which is fascinating. At least that’s fun. But no, I think I’m too much of a scientist to believe that. I just don’t have any foundation in science that would allow us to believe that the month in which we were born in a cycle sets our personality and destiny.
Lex Fridman
(01:48:09)
I agree. And yet there’s so much mystery all around us that … What I do like is the inbuilt humility to that worldview, that there’s this whole, you can call it a spiritual world, but a world that we don’t quite understand. And then you can wonder about what is the wisdom that that world carries. And then you can construct all kinds of systems to try to interpret that, and then there is where the human hubris can come in. But it’s good to be humbled by how little we know, I suppose.
Ed Barnhart
(01:48:46)
I do love the mysteries of the world. And I would love to find an ancient civilization, but I don’t want to solve the mysteries of the world. I think they’re one of the things that make life worth living.
Lex Fridman
(01:49:00)
That’s true. That’s true. You mentioned the Maya writing system. What are some interesting aspects of the written language that they used?
Ed Barnhart
(01:49:12)
Well, there’s one thing that confounds me as a guy who’s spent the better portion of my life studying it. I had the honor of being the student of Linda Schele right here at the University of Texas at Austin. She got together the group who broke the Maya code of hieroglyphics in the 1970s. So I learned from the best and loved every minute of it. I miss Linda.
Lex Fridman
(01:49:36)
Can you speak to that code actually, the hieroglyphic code and what it takes to break it?
Ed Barnhart
(01:49:40)
Oh boy, what a thing. We had kind of a Rosetta Stone. We had a page out of Diego de Landa’s book. A priest who was converting the Maya in Yucatan asked his informants about their writing system and what every sound meant. And he was convinced they had an alphabet like we do. So he got this Maya guy to sit down, and he said in Spanish, “Okay, you’re going to write all the symbols right here in my book. Write an ah here, write a be here, write a ce here.” And that guy just wrote all of the sounds that the priest told him to write. They were actually syllables. They were consonant-vowel combinations. They weren’t an alphabet, but that turned into our Rosetta Stone of sorts.

(01:50:34)
The big key is that the Maya still speak that same language. There are millions of Maya people who are speaking a version of Maya. Now here’s where I get confused. We’ve got a single writing system that is intelligible, we’ve broken the code, so we know that it’s basically the same writing system from the top of the Yucatan into Guatemala and El Salvador. But we have 33 Maya languages today that are mutually unintelligible. And we backwards-project the language that they spoke back then, the one the glyphs are in, to something called Chʼoltiʼ, which is a combination of Chʼortiʼ and Ch’ol, two of those languages.

(01:51:20)
But it doesn’t work for me at all. If there was one language, maybe two, back then, how did it flower into 33 mutually unintelligible languages in just 500 years, during acculturation and horrible infectious diseases that killed 90% of the population? How did that happen? So we’re missing something huge here. I think it’s more like Chinese, where Chinese writing can be read in multiple languages and still understood. I don’t know exactly the mechanics of how that would happen, but it just seems impossible that there are more languages, not fewer, in the Maya area after the last 500 years that they’ve been through.
Lex Fridman
(01:52:10)
So you think that there’s some kind of process of either rapidly generating dialects, or there always have been these dialects, or I should say distinct languages, even though there was a common writing system?
Ed Barnhart
(01:52:23)
There must have been a way that multiple languages understood the same writing system. Or maybe there was something like Latin. You know how there was a period in Europe where most people were illiterate and there was this priesthood who all understood Latin and they wrote in Latin? Maybe the hieroglyphs represent a kind of Latin in the ancient Maya world.
Lex Fridman
(01:52:51)
But we don’t really know, and there’s not clear evidence to fill in the gaps of how it’s possible to have that.
Ed Barnhart
(01:52:56)
Right. But we did realize, it was actually a Russian scholar named Yuri Knorozov who broke the code. The Americans and the Europeans were absolutely sure that the written language was a dead language. But Yuri, not knowing any of that, not being filled with all of those thoughts from America and Europe, went about it in the way that he was taught in his grad school in Moscow and just went to the dictionaries. And he looked at the Yucatec language that they’re speaking today, and he applied it to the symbol system, and he knew that there were certain sounds. He used Landa’s alphabet.

(01:53:45)
His two key examples were a picture of a dog with a symbol over it and a picture of a turkey with a symbol over it. And a dog in Yucatec is tzul. So he saw two symbols and he said, “This one’s probably tzu and this one’s lu.” And then the turkey was kutz, so it would be ku ending in tzu. And he showed how, look, this is tzul. Those two signs that should be tzu are the same symbol. And that began this process of unraveling the syllables that we’re still working on today.
Lex Fridman
(01:54:25)
That’s fascinating. Just that decoding process is fascinating. How do you even figure that out? And there’s probably still, are you aware of any written languages that haven’t been decoded yet?
Ed Barnhart
(01:54:38)
Yeah, there’s a number of them. There’s the Easter Island script. I was just talking to someone about it, and we’ve apparently made a few advances there now. It’s called Rongorongo. And we only have about maybe 25 examples of texts, but we’re beginning to break that.

(01:54:55)
There’s also, the big one is Harappan. For a long time we used to say there were five independent scripts on the planet, and those were Chinese, Cuneiform, which is Mesopotamian, Egyptian, Maya, and then Harappan, which is from Northern India. That’s the only one that we’ve never cracked. And now all the epigraphers, the people who do it, that’s the term, epigraphy is translating these scripts, they’re all ganging up on Harappan and want to kick it off the list because we can’t break it. It had a big enough symbol set, but no one’s been able to crack it. And now they’re saying it’s just an elaborate symbol set and doesn’t reflect the spoken word.
Lex Fridman
(01:55:45)
That’s a hypothesis, which would explain why it’s so difficult to break.
Ed Barnhart
(01:55:52)
But we could just be faced with a quitter generation. Maybe somebody will pick up the baton next generation.
Lex Fridman
(01:55:56)
Kids these days.
Ed Barnhart
(01:55:59)
The other one that fascinates me is from the Americas. It’s the quipu. The Inca had the quipu, these knotted-string records, but it was definitely encoding more than just math. We know the math. I can do the math quipus and figure out what they’re totaling. Yeah, there’s a quipu right there.
Lex Fridman
(01:56:19)
“Quipu are recording devices fashioned from strings historically used by a number of cultures in the region of Andean South America. A quipu usually consists of cotton or camelid fiber strings.” So there’s a set of strings and they’re supposed to what, to be saying something?
Ed Barnhart
(01:56:32)
There’s one long string that the little ones dangle off of. And each one of the dangling strings has sets of knots on it. And the knots, some of them are mathematical quipus, and those, we can just do the math. We can prove that it’s math.

(01:56:49)
They also encoded language in there. They had entire libraries in Cusco where Spanish conquistadors were brought through, and the caretakers of the libraries would just, they’d say, “Pull that one down, read that one to me.” And he’d pull it out and just read a history of something that happened 200 years earlier. So it was definitely writing.

(01:57:11)
But in the 1570s, one head of the church there had all of the people who could read them, called quipucamayocs, gathered up, had them read all of their quipus and transcribe them into Spanish books, and then had the quipus burned and those people murdered.
Lex Fridman
(01:57:32)
Well, there you go.
Ed Barnhart
(01:57:33)
And so we can’t break the code still today, but we know it was absolutely a written language. Though it wasn’t written, it was woven or knotted.
Lex Fridman
(01:57:45)
And there’s still some quipus available that could be-
Ed Barnhart
(01:57:48)
I think now we’ve just crossed the 1,000 mark. So we have 1,000 quipus. There’s enough to break the code, and I think this generation might be the one that does it.
Lex Fridman
(01:58:01)
It’s sad that so few have survived. 1,000 is good, but it’s-
Ed Barnhart
(01:58:07)
But see, Peru has barely scratched the surface with archeology. There’s so much out there. There was a priest I read about named Diego de Porres, who was one of the early people in Peru converting communities. And his chronicle is real clear that he wanted to teach this community of 3,000 people all the Spanish prayers, the important ones for them to be converted to Christianity. And he had the community’s quipucamayocs knot quipus for each person, so that they could read them out and memorize the prayers. And if they were caught without their quipu in town, they were flogged. So he had 3,000 of the same quipu made and handed out to this community. If we find that community and find its cemetery, there is our Rosetta Stone.
Lex Fridman
(01:59:05)
It is probably the case that there is somebody in Peru, maybe a large community, that knows this language, that understands, and you just have to show up and ask them. And they’re like, “Oh, yeah, yeah, yeah.”
Ed Barnhart
(01:59:18)
There are some communities that are using them. There’s a couple of them that we had high hopes for, and then it was apparent that they were just making shit up. They didn’t actually know how to read it. They just knew it used to be read, so they made up a bunch of stuff about what it says, and they bring it out and act like they can read it. But then when you ask them the details, they don’t know.

(01:59:37)
But then on a much simpler level, there are llama herders who keep a string in their pocket, and they’ve got the knots equaling how many llamas they have, and then they have subcategories of information like: this one’s sick, we’ve lost these ones, this one’s pregnant. So they have these simpler, more mathematical quipus, but they’re using them, in effect, as records.
Lex Fridman
(02:00:04)
Is it possible through archeology to know what the social organization of the Maya was? Maybe if there was a hierarchy, maybe what the political structure was, if there was a leader, different roles, priests, who had the power, who was powerless, who had certain kinds of roles, is it possible to know that?
Ed Barnhart
(02:00:28)
Actually, because of hieroglyphs, yeah, we know a whole lot. There’s basic things that archeology, which is a very blunt tool, can figure out, like this guy lives in a rich house, this guy lives in a poor house. But the hieroglyphs tell us specific stuff about who can rule: that it was hereditary, that hereditary rule was based on royal blood that could be burned to connect to the ancestors that lived up in the sky versus the ones that lived in the underworld. It also told us things about hierarchy, like that there were councils of lords underneath the king who each represented clans who had their own neighborhoods, and that there were revolving positions of authority.

(02:01:17)
The site that I mapped for my dissertation, Palenque, where I spent years in the jungle, had a lord’s title named Fire Lord. That was one of the generals of their army. And we could tell that position changed over time. So there was one guy named Chak Suutz’ who was the Fire Lord for the early part of the reign of a king called Ahkal Moʼ Nahb. Then by the time he carves this other panel, there’s another guy in the position of K’ak Ajaw, which was the Fire Lord. And so he had-
Lex Fridman
(02:01:57)
Got promoted or demoted?
Ed Barnhart
(02:01:58)
Well, he could have been killed, in that case. But then we have the interesting case of the Postclassic, where they shed the idea of kings. They don’t like kings anymore. That’s probably a big part of why the Classic disappearance and the abandonment of all those cities happened. People just got sick of kings. And so they turn to this more council-based system at Chichen Itza.

(02:02:23)
But then when Chichen Itza falls, there’s a new city whose architecture looks a lot like Chichen Itza. It’s called Mayapan. But it has what is called the League of Mayapan. And it has a council of representatives from the communities from all around the Yucatan. And it is basically a democracy. It is a Maya democracy that happens. The individuals from all around the Yucatan are there. Each family has their own council house at Mayapan, though they live back at their place. It’s kind of like a Maya Congress.
Lex Fridman
(02:03:03)
A representative democracy.
Ed Barnhart
(02:03:04)
It really was. And this happens in, I guess, 1250 AD that this Maya democracy happens. And we know the names of them, we know the families. And of course, they were humans, so eventually they screwed it all up. One family murdered another family and the whole city burned.
Lex Fridman
(02:03:27)
And of course, it’s probably some fascinating corruption, which is hard to discover through-
Ed Barnhart
(02:03:32)
Part of it was the Aztecs screwing things up. The Aztecs came down with all sorts of, “We’ll buy everything you’re making.” And then eventually they were like, “Could we maybe buy some humans?” And then one family was like, “No.” And the other family was like, “I don’t know, they’re making us a lot of money.” So then they murdered each other, and the water supply got polluted, and then the city burned.
Lex Fridman
(02:03:55)
It seems like slavery, murder, and disease are a large component of the story of humans. You mentioned different periods in the Maya, the Classic, the Postclassic, the Preclassic, the Archaic. Can you just speak to that? So Archaic is before there was really a civilization?
Ed Barnhart
(02:04:14)
Archaic’s pretty much when everybody’s hunter-gatherers.
Lex Fridman
(02:04:17)
So the Classic period was the golden age. And then the Preclassic is the interesting time that we were talking about. And the Postclassic is when the democracy came about.
Ed Barnhart
(02:04:28)
Well, midway through it, they reverted back to council systems. The Maya loved to be part of councils.

(02:04:34)
So yeah, the Preclassic is like the origins of civilization. They’re starting to build cities. They’re starting to create their calendar. They’re starting to create these wonderful works of art. And the Classic period, if you look at 10 different textbooks on the Maya, you’ll get 10 different dates that wiggle around in there. But basically that’s the age of kings to me. That’s when these cities decide that they’re going to organize themselves around elite royal families that have this magical blood that can contact their ancestors, who are directly in contact with the gods. The Maya never contact their gods directly. They contact their ancestors, who are up there and act like liaisons to the gods.

(02:05:22)
And so the Maya age of kings has these dynasties sprouting up where these people have basically snowed the rest of the people into believing that they’ve got a special quality in their blood and only their offspring can do the same trick and talk to the gods, whereas every Joe Maya can let their blood and burn it and contact their ancestor. But Joe Maya’s dad is just a corn farmer who lives down below and has no influence over the gods. But the rulers, their spirits go down briefly, but then they go up into the heavens and reside where the gods are and act as liaisons. So that’s the validation for this kingship that happens for about 400 years.

(02:06:11)
I know we say 250 to 900, which is kind of the encompassing edges of it, but it’s interesting that it’s actually specifically the ninth bakʼtun of their history. The ninth bakʼtun begins in like 426, and it ends in like 829. So it’s a 400-year period of time. And before that, there were no kings. And after that, there really aren’t kings. They’re heads of councils. So I call it the age of kings, where everybody’s following the directives of basically a despot. And for a while, that’s great. Cities build up, populations grow. I see it as kind of a cult-of-personality moment too. Strong, charismatic leaders inspire people to do great things together.

(02:07:06)
But eventually, as happens all the time with power, too much power corrupts. All of a sudden there’s this unwieldy, huge elite class that has to be treated special by everybody else. And they start saying, “Well, I think we should fight with those guys and you guys should go take these things.” And people eventually get sick of it and they walk away from these cities, and that’s how we get the mysterious Maya collapse where all these cities are just gone.
Lex Fridman
(02:07:35)
That’s one of the great mysteries of the Maya civilization is that over a very short period of time, like a hundred years, it seems to have declined very rapidly. It collapsed. What do you think explains that? What happened?
Ed Barnhart
(02:07:50)
I think it’s a failing of archeology to properly see what was happening. I think that most of those cities’ populations moved no more than 20 to 40 kilometers out and started their own farms, and they lived in perishable houses. And all the archeological signature shows is that nobody lives in the city center anymore. We don’t see a bunch of mass graves. There’s no evidence of people getting sick. There are certain cities that fought with each other at the end, and we see that signature plain as day. We know when a city was attacked and burned. Mostly that didn’t happen. People moved and migrated.

(02:08:33)
And it seems like right around between 800 and 900, a lot of the elites that were on top, most of them in the rainforests of northern Guatemala, they move. They move in two directions. Some of them move into the highlands of Guatemala, and some of them move up into the Yucatan. The city of Chichen Itza becomes the next big capital in Yucatan. But the word Itza is actually a word describing the people who lived around Lake Peten Itza in northern Guatemala. And all of the Maya are super clear about that, that the Itza came in as immigrants with these new ideas and created Chichen Itza. So the elites who were no longer welcome in their cities just moved and set up shop somewhere else.
Lex Fridman
(02:09:31)
So why was there a decline? What was the catalyst? Was there a specific kind of event that started this? Was this an idea that kind of transformed the society?
Ed Barnhart
(02:09:40)
We are still debating that. I don’t think there is a single reason. I think humans are complicated. I think a lot of things led to this. One thing we can see archeologically is that every one of the cities became overpopulated. They were too popular. And we think that they pushed the limits of their capacity to feed and house people. We see it in lots of the cities at the end of the Classic period that people are seasonally starving.

(02:10:12)
I remember really stark evidence in Copán, Honduras. Copán was this beautiful city, a lineage of 17 kings. But with the last kings and the last elite burials that we dig from the city center, the teeth are the telling part. They get this thing, when you’re growing up and you’re not getting enough food seasonally, it shows up in the enamel of your teeth. It’s called dental hypoplasia. And if somebody’s seasonally starving, they get these lines in their teeth. And in that last generation of Maya before they left Copán, even the rich people are seasonally starving. So there’s a problem there for sure.

(02:10:59)
But I also think, it’s a weird thing, it was not an empire. It was a group of independent city-states, like Greece. Some of them were allied, some of them were enemies. There was a huge civil war that settled out about the end of the Classic period. So if it was Europe, the victors would’ve taken over, the losers would’ve beat it and gone wherever they went. But when they abandoned these cities, which were still independent, they all left, both the guys that won and the guys that lost the war. So it couldn’t be as simple as spoils go to the victor.

(02:11:36)
It’s such a wide area. Not everybody was starving like the people in the Copán Valley. So I personally think it was calendrically timed. It is interesting to note that that ninth period, that ninth 400-year period ends right then. And I think a lot of people, I can’t prove it archeologically, but I think a lot of people said we’re coming to the end of a great cycle and we need to renew. We need to change what we’re doing.

(02:12:06)
When you talk to the Maya today, like at the end of this 2012 thing, if you actually talk to Maya, say, “What happens at the end of a big cycle here?” They say cycles are a time of renewal and transformation, that it is all of our obligation to change our lives at the end of cycles. That change is coming. We can either be part of it or we can get steamrolled by it.

(02:12:32)
The Aztecs did this neat thing called the New Fire Ceremony every 52 years, which was the biggest their calendar would go. They’d burn down perfectly good temples. And they’d burn down their houses sometimes. And they would just, everybody in society would perform this, what they call the New Fire Ceremony, and they would renew the world. So I think my personal theory is that the Maya decided at the end of the ninth bakʼtun that it was time to renew the world.
Lex Fridman
(02:13:05)
I think this theory makes sense because they really internalized the calendar. That was a really big part of their culture, the sense of the cyclical nature of civilization.
Ed Barnhart
(02:13:15)
That’s what I think. I think that they created that calendar to perceive the cycle and to harmonize with it.

Aztecs

Lex Fridman
(02:13:27)
You mentioned the Aztec. What was the origin of the Aztec? Where did these people come from, at what time, and how?
Ed Barnhart
(02:13:36)
Almost every one of the cultures we’re talking about now, we have two different versions of the answer to that question. We have the archeology version, and we have the Aztecs themselves. The Aztecs have this wonderful migration story where they say that they came from a place well to the north called Aztlán. And they had this migration that went through kind of a hero’s journey, where they go to this snake mountain place and they encounter the birth of the war god that they’ll worship after this. And they stepped into the Valley of Mexico as the last, the lost brothers of everyone in the Valley of Mexico. They said that they all came from the north near Aztlán, from a cave with seven different passages called Chicomoztoc. And that all the people who spoke the language Nahuatl came from the cave. And most of them went early to the Valley of Mexico. And in the Aztecs’ story, they were just the lost tribe. They were the last brothers to come in.

(02:14:51)
But then they show up late game, and they become mercenaries. They just start working for communities in the Valley of Mexico. And this takes place in the 1300s. So about 200 years before Cortés shows up, the Aztecs show up in the Valley of Mexico. And they make themselves this indispensable group of mercenaries. They do the dirty work. All the civilized communities around Lake Texcoco, which is now Mexico City, it’s all dried up, but those guys were too civilized to fight with each other. But they could hire the Aztecs to do their dirty stuff. So the Aztecs did that and really changed the political game of the Valley of Mexico.
Lex Fridman
(02:15:43)
The dirty stuff. They were the muscle.
Ed Barnhart
(02:15:46)
Yeah. They’d go in and they’d kill whoever you wanted killed, and now you’re the king of this area.

(02:15:52)
So one of these kings that they were working for really liked them and decided, “I’m going to make the Aztecs part of our ancestry. I’m going to give my daughter to marry the head of the Aztecs.” And the Aztecs sacrificed her. And that really pissed that guy off. So he took his whole army and ran the Aztecs out for a while. They say they lived in this horrible desert section eating lizards.

(02:16:22)
But then one of their priests says, “We’re going to walk around the lake, and my visions say that where we see an eagle sitting on a cactus with a snake in its mouth is where we will build our capital.” And they see that, but it’s out on an island in the lake. And he said, “Well, I don’t know, that’s the place.” So they build up an island. They go to that spot and just start piling up lake muck until they make a whole city there in the middle of the lake. They make an island city. And all of this occurs in about a hundred years. So they show up about 1300. The capital of Tenochtitlan, as they called it, is really established. And from there, they quickly take over the entire valley. They make what they call the Triple Alliance, meaning the two other big communities of the lake are now their allies, but they’re not really allies. The Aztecs were brutal. Those guys agreed to shut up and let the Aztecs run the show. And then the Aztecs spread like a wildfire all the way down into the Maya area. Everywhere they go, they rename everybody’s towns and make them pay tribute.
Lex Fridman
(02:17:42)
Pretty short lasting civilization. Spread extremely quickly. Famous. What are some defining qualities that explain that?
Ed Barnhart
(02:17:53)
I think they had an attitude like Attila the Hun. They just had no problem ripping your skin off. Everybody else had become too comfortable and too civilized. And the Aztecs were just mercenary. They told everybody, “We can either rip your heart out or you can work for us. And if you work for us, you’ll be just fine.” They did that in every town they went to.

(02:18:20)
The first thing they’d do is show up with a bunch of merchants. There was a merchant class, the pochteca, who were also military. They were really the people who assessed where they were going to attack next. They’d go in with a bunch of Aztec products and say, “We’d like to trade with you.” But all the time, they were assessing their military prowess and what products they had that they could take. And then soon after the pochteca were there, the military would come with the reconnaissance.
Lex Fridman
(02:18:51)
So the Aztec had a huge warrior class, as you’re saying. So can you linger on their whole relationship with war and violence?
Ed Barnhart
(02:19:02)
They worshiped a war deity. Their main temple was the Templo Mayor. It had two temples up on top. One was Tlaloc the Rain God, who liked a lot of sacrifice himself. But then the other one was Huitzilopochtli. That translates “The hummingbird on the left.” But he’s the war god. I love that he’s a hummingbird. Maybe he’s fast and he comes from the magical side or something.

(02:19:32)
But then right next to the temple, on either side were the two temples of the warriors. One was the Eagle Warrior clan, the other one was the Jaguar Warrior clan. And they were symbolically in competition with each other, though a unified force. I guess probably an analogy between the Navy and the Air Force. They had a good-natured competition of who was better, but they were the same force. So those were their symbolic warriors.

(02:20:00)
They dressed up in all of their finery, and they would come at people with these two forces, and it was very unlike anything that had happened before in Mesoamerica. Again, I think I could draw a parallel to what happened in Europe. The famous Henry V moment at Agincourt, where his kind of ragtag army wipes out half of France’s aristocracy with the longbow. Up until that moment, Europe had a very “war is for the elite classes” kind of attitude. And then after France lost half their aristocracy, it was like, maybe we should be hiring from the villages.

(02:20:50)
The same sort of thing happened with the Aztec. Mesoamerica really didn’t have huge standing armies, but the Aztec put this army together and they intimidated people. They didn’t actually have to use it a lot. It was used to great effect in the Valley of Mexico, and for the rest of Mesoamerica it was mostly the fear factor.
Lex Fridman
(02:21:14)
But there also seemed to be a celebration of violence. I think you said that beauty and blood went hand in hand for the Aztec. Maybe, like the Roman Empire, they just had a different relationship with violence, where it stood in the purpose of life, the purpose of existence. Is that fair to say?
Ed Barnhart
(02:21:41)
I would hypothesize so. I mean, I think it’s one of the wonderful things about studying these ancient cultures, knowing what our human capacity is and the Aztecs, when I said that statement, what I meant by that is they were absolutely comfortable with human sacrifice and ripping people’s hearts out.

(02:22:04)
They had this just grotesque, violent bent, but at the same time, they also absolutely loved flower gardens and poetry and music and dance. The same Aztec king who would order the hearts of a thousand people extracted would also stand up at dinner parties to recite his own poetry or the poetry of famous statesmen who had come before him. And they spent money on things like flower gardens. All of the causeways leading to the Aztec capital had beautiful flower gardens, and they had a museum and an aquarium and a zoo, and they had an opera and a ballet. And these things existed together. There was not, in the Aztec mind, any conflict between witnessing someone’s heart getting ripped out one moment and going to the ballet in the evening.
Lex Fridman
(02:23:12)
How does that relationship with war and violence contrast with the other civilizations of Mesoamerica and South America, maybe the Maya? What was their relationship like with war?
Ed Barnhart
(02:23:23)
The Maya were certainly influenced by the Aztec at the end, so we get a skewed perspective from the contact-period accounts, because the Maya were much more violent and sacrifice-oriented in their post-classic rendition. But in the classic period, it was mostly the priests and the king doing the sacrificing of themselves. We know that the Maya kings would cut their penises and then bleed that blood onto paper, and the paper would burn and become the smoke through which they’d commune with their ancestors.

(02:24:06)
But they’d actually tie this paper onto their penis, cut it, and then dance. So the blood splattered, but it was them cutting themselves. It was different than killing a bunch of other people for it. It was an auto-sacrifice, we call it. Still very macabre, but very different than deciding a whole bunch of other people should die. It was a self-sacrifice thing.
Lex Fridman
(02:24:30)
Can you speak to the sacrifice a bit more? Animal sacrifice, human sacrifice. What role did that play for the Maya, for the Aztec, for the different cultures here? Was it religious in nature?
Ed Barnhart
(02:24:42)
It was absolutely religious in nature, and the Aztecs were of the opinion that the war god demanded that people be captured and sacrificed, and it had to be valuable people. There was a lot of… before they made that big standing army, they had just ritual battles that they would have, and they’d take captives. In fact, all around Mesoamerica, they wanted captives so that they could bring them back and sacrifice them for the gods, and the Aztecs, deciding to specifically follow the war god, did this more than anybody. They did it so much and so successfully that they didn’t have any enemies nearby.

(02:25:27)
So they decided that this one poor sucker group, not that far away, called the Tlaxcallans, they were never going to make peace with, so that they could go close by every year and just have a little symbolic war with the Tlaxcallans and haul them back for sacrifice. Cortes met those guys and he was like, here are people who hate their guts. I’ll just use these guys. So we say, oh, Cortes took over the Aztec world. It was Cortes and 20,000 super pissed-off Tlaxcallans.
Lex Fridman
(02:26:04)
And the actual sacrifice, so there would be kind of these ritual battles, or is it chopping off people’s heads? Like, are there some interesting rituals around the sacrifice?
Ed Barnhart
(02:26:15)
It’s mostly heart extraction, sometimes heads, but they’d bring them up on top of the temple so everybody could see it. And they had a specific stone where they would bend them over so their rib cage would come out, and they’d use a thick obsidian knife, and they had a really, just, tried and true way to do it. They’d stab it in at a certain place, and then they’d push down on the sternum as they ripped up on the rib cage. So they’d just make a place where they could rip it right out.
Lex Fridman
(02:26:47)
With their hand?
Ed Barnhart
(02:26:47)
Yeah, with their hand. But they were really just surgical about it. They’d use a thick obsidian knife where they could just break the ribs right along the sternum and then push the sternum down, pull up and just [inaudible 02:27:00].
Lex Fridman
(02:27:00)
While the person was alive?
Ed Barnhart
(02:27:02)
Yep. While the person was alive. And the Aztecs had this idea. There was a horrible drought that went on that almost ruined the entire valley, and they came to this conclusion that it’s because we haven’t been killing enough people. We’ve got to bump this up. And then when they did, when they really took it out on the Tlaxcallans, it rained again. So it was proof positive that they should just keep doing that. And they ate people as well. They really did.
Lex Fridman
(02:27:32)
As part of the sacrifice or?
Ed Barnhart
(02:27:35)
After the sacrifice, then they would eat them. And this was part of the drought and the famine thing that started it, but then it was just kind of the thing to do. When Cortes got there, they were still having certain special feasts that involved humans, and it really upset the Spanish that they would be tricked into eating human. Like, “Hey, you’re liking dinner? That was a human.”
Lex Fridman
(02:28:00)
So the idea, was it actually having a taste for human flesh or is it just these kinds of ideas of if you eat a person’s heart that you can get their spirit and their strength?
Ed Barnhart
(02:28:14)
In the case of the Aztecs, it seemed like they just liked it. This guy, Sahagun, who was a very responsible chronicler, was pretty specific that there was a distribution thing. The elites got butts. The butts were the best part, so the butt cheeks, those were the best parts to eat. And then it went down the chain until some people just got fingers and toes.
Lex Fridman
(02:28:40)
Literally butt taste for the Aztec. Boy. All right.
Ed Barnhart
(02:28:45)
They really did. They really did. In fact, that’s what caused the, have you heard of the Noche Triste? The sad night? The night that the Aztecs really go nuts on the Spanish and kick them out. It’s all triggered by this one guy, Pedro de Alvarado, who’s left in charge by Cortes as Cortes goes to the coast and tries to talk to the new force, to talk them into being for him, which he does.

(02:29:14)
But Pedro Alvarado is left back in town in charge and they’re doing another one of these huge Aztec buffets and parties to honor them. And it happens. The guy says, “Hey, do you like dinner?” Like, oh yeah, it’s a nice dinner. “Well, it’s humans. You’re eating humans. See, I told you they were good.” And Alvarado just freaks out and he has the guards close the doors and he murders everyone in the party. Women, children, nobody has weapons. He just murders everyone.

(02:29:49)
And that’s what spazzes the Aztecs out to eventually murder Montezuma who was their captive and then try to murder all of them. And it was all Pedro Alvarado’s fault for freaking out about eating humans.
Lex Fridman
(02:30:05)
Just a little practical joke.
Ed Barnhart
(02:30:06)
Yeah. It was just, they thought it was funny. He did not.
Lex Fridman
(02:30:09)
That’s fascinating. I didn’t realize. So I kind of assumed that some level of cannibalism would have to do with eating the heart to gain the spirit of the person or something like this, but.
Ed Barnhart
(02:30:19)
In certain deer-hunting rituals and things, for sure. But the Aztecs, no, they just liked eating humans. It was part of the fear factor too. I mean, they could walk into a new town and be like, you guys could either send us a number of quetzal feathers every month or we could eat you.
Lex Fridman
(02:30:36)
So that’s psychological warfare and actual warfare. It worked and that’s how they spread quickly.
Ed Barnhart
(02:30:42)
And they were just about to take over the Maya when the Spanish came and messed everything up. They had the Maya surrounded, and they were about to take over the whole Yucatan.

Inca Empire

Lex Fridman
(02:30:52)
So you think without the Spanish, there would be this Aztec empire that would last for a very long time.
Ed Barnhart
(02:30:59)
I think there would’ve been an Aztec empire. I think they would’ve finished dominating everybody, but they did it through hate and everybody hated the Aztecs.
Lex Fridman
(02:31:09)
[inaudible 02:31:09].
Ed Barnhart
(02:31:09)
So it wouldn’t have lasted forever. They were not ruling justly. They were ruling by force. And that can only go on so long before revolution happens. The Inca Empire, I think that would’ve gone on forever, because they were really community oriented. Once the Inca took over, no one in the Inca Empire starved. They built architecture. Everyone was safe. It was a society that could have lasted a long time.
Lex Fridman
(02:31:37)
What was the origin of the Inca Empire?
Ed Barnhart
(02:31:41)
Well, it was bloody at first, like most of them are, but once they started taking over, what they did is they empire-built. Everybody else had just raided their neighbors to get the resources, but everybody they raided, they turned into the Inca Empire, and they created this incredible Mit’a system where you took turns working, and they created the road system so they could get groups of workers back and forth. So for a town of, let’s say, 5,000 people, the Inca would roll up with an army of 100,000, 200,000 people and say, would you guys like to be part of the empire? Or would you like us to escort you to the edge of the empire?

(02:32:25)
And if your mayor here agrees, then he can have a town. He can have a house in Cusco. But then the very next month, a big work crew would show up and they’d start building agricultural terraces and storage units. And every month with the agricultural excess, they would have big parties and everybody would eat. So people lived well in the Inca Empire. It was a rough beginning, but everybody who agreed to be part of it immediately had access to a whole bunch of resources and security they never had.
Lex Fridman
(02:33:01)
So they started in South America and Peru and Cusco. Cusco was the center of it.
Ed Barnhart
(02:33:07)
Cusco, in their language, Quechua, means navel or belly button, and it’s up in the mountains. They called their empire Tawantinsuyu, the land of four quarters, and the center of those four quarters was Cusco.
Lex Fridman
(02:33:25)
It sprang to life in 1200 A.D.
Ed Barnhart
(02:33:30)
We backwards project what it was, but it was probably the mid-1200s when the first Sapa Inca, the first ruler, came in. But it was, I think it’s the ninth one, [inaudible 02:33:45] Pachacuti, who really started being an empire builder.
Lex Fridman
(02:33:50)
Part of what really defined the empire, as you said, was roads. They built a massive road network.
Ed Barnhart
(02:33:58)
Roads, yes. In the same way the Romans had a strategy of building roads and infrastructure, and then in every place they took over they’d create certain key pieces of Roman architecture that kind of made that city Roman, and they’d rename it something, the Inca did the same thing. They had certain signature Inca architecture that they would build in as the administrative part.

(02:34:27)
They’d send the Khipukamayuq, the guys who would weave or knot the khipus as accountants, and they would go through and say what everybody did. Okay, you’re a good farmer. You’re going to farm. You are a good weaver. You’re going to weave. All the men here are going to take a turn at being part of the army. And then they sent independent Khipukamayuqs too. Every community had five or six that were not allowed to work with each other, and they all had to independently send their khipus back to Cusco. And if there were accounting discrepancies, they were called to Cusco to figure out who was lying about what.
Lex Fridman
(02:35:07)
So there’s a super sophisticated record-keeping system.
Ed Barnhart
(02:35:10)
Yeah. And that was the khipu. The Spanish recorded what they could and then burned them all.
Lex Fridman
(02:35:17)
But that’s an interesting development for an empire because that allows you to really expand and have some kind of management, some level of control.
Ed Barnhart
(02:35:27)
They couldn’t have done it otherwise. At the end, they were at least 10 million people, and there was just no way to do that without some sort of sophisticated record-keeping system.
Lex Fridman
(02:35:37)
If the Inca had to face Aztec, who wins?
Ed Barnhart
(02:35:40)
Inca.
Lex Fridman
(02:35:41)
Inca.
Ed Barnhart
(02:35:41)
I mean, the Aztecs were psychotic, but the Inca had just reserves for miles, and they had that essential hearts-and-minds element. There was only one thing that everybody got pissed off about when they joined the Inca Empire. For some reason, everything was owned communally except the llamas. The llamas were the king’s. Some of them would stay in town just to be work llamas, but you didn’t own your llama anymore. And people are really attached to their llamas. To this day they are like family members. So it’d be like somebody walked in and said, everybody’s family dog is now mine. [inaudible 02:36:23] really upset people on an emotional level.
Lex Fridman
(02:36:25)
Well, I mean, so llamas got domesticated at some point, probably. I don’t even know when, but early on.
Ed Barnhart
(02:36:35)
We have rock art that seems to show a progression, from people depicted hunting them to people depicted standing next to pregnant ones. So it was at least as far back as the archaic period that they became friends.
Lex Fridman
(02:36:52)
But if you roll in and you own them, that’s?
Ed Barnhart
(02:36:55)
Yeah, that pissed everybody off. For some reason, the Inca owned everybody’s llamas instantly, and he would take anything he wanted. A lot of them would just get carted away that day, just sent to Cusco. And they’d also take their mummies. That was a weird thing. Everybody mourns their dead, but the Inca just refused to accept it. The mummies were still there. Okay, he’s dead, but look, he’s still got clothes. He’s at the party. Let’s put a beer in front of him. They just kept people as mummies. And so for the ancestral mummies of every town, part of being absorbed into the empire was, okay, your most important mummies are now going to have their own beautiful house in Cusco. They would physically bring those mummies to Cusco to make Cusco the spiritual heart of their belief system.
Lex Fridman
(02:37:52)
I mean, I could see how that would piss people off, but it’s also a pretty powerful way to say, the ancestors that you idolize, that you respect, are now in the capital.
Ed Barnhart
(02:38:03)
They’ve been elevated. We didn’t steal them. We have given them a new place of honor, and you’re welcome to come visit them all the time. And they did. They have these festivals where everyone from all corners of the Inca world would come to Cusco.
Lex Fridman
(02:38:18)
And which of the civilizations mummified people?
Ed Barnhart
(02:38:22)
The Incas for sure mummified people and even did some of that kind of Egyptian-esque taking out of organs and preparing the body. They put straw inside the cavity and mummified them. But the Maya didn’t do it at all. The Maya, in fact, on purpose would flood tombs with water so that the skin would float off the skeletons faster, and then they’d get back in there. It was jungly, so I think the bugs probably had a part in it too. But then they would get back in there to get the bones. They’d open it back up and take the bones out and paint them with red cinnabar. The one that I was in, in Copan, we had evidence that they had gone in there four different times, and the last couple times they only took the skull out and repainted it and then put it back in, articulated on the skeleton. But they didn’t mummify. They purposely flooded the tombs so they could get the skin off faster and get to the bones.
Lex Fridman
(02:39:30)
But would they keep the bones?
Ed Barnhart
(02:39:31)
Yeah, they’d keep the bones and they’d pull the bones out occasionally and do rituals to them or commune with them and then put them back in.
Lex Fridman
(02:39:39)
So there’s still a deep connection to the ancestors through the physical manifestation of the ancestors then, whether mummified or bone.
Ed Barnhart
(02:39:47)
And to this day, if you do an excavation here in the United States, Native American people don’t like it. They don’t like their graves disturbed, which is fair enough. I wouldn’t want somebody digging up my grandma either. But the Maya, they love it.
Lex Fridman
(02:40:01)
They love it.
Ed Barnhart
(02:40:02)
And every Maya person, if we find a grave, they’re like, yeah, look at that. Bones, cool. Can I touch? They’re not spooked about it at all. They think it’s exciting. I one time helped out a physical anthropologist in town in Copan to get an osteology collection together of various animals, so if we got bones from an excavation, we could see what kind of animal it was based on the collection. And this family said, well, our family dog died last year and we buried him in the backyard. You could go dig him up. And so we were like, okay, yeah, I mean, we do need a dog.

(02:40:44)
We’ll go dig up your dog. And they were like, but the kids really want to help you. So their kids came out and this was like their puppy, and it died less than a year ago. When we got to it, one of them just grabbed up a bone and he was like, [inaudible 02:40:59] like little bitty bones. Yay. What a weird attitude. That’s your dead dog there. But they have a different relationship with the dead.
Lex Fridman
(02:41:08)
In some sense that’s a beautiful attitude, right?
Ed Barnhart
(02:41:10)
Yeah.
Lex Fridman
(02:41:11)
Why pretend like we’re not mortal? This is just the process of it. And as you say it now, it kind of would be cool.
Ed Barnhart
(02:41:21)
That’s what Day of the Dead is all about. And I love Day of the Dead. Halloween’s this creepy thing where they’re all monsters, but Day of the Dead is this beautiful time where we remember our ancestors. I convinced my kids after the movie Coco came out. Now we have an altar with all of our great-grandparents on the altar, and we talk about who they were and how they lived, and we put things on the altar that mattered in their life, and we remember them on that day and it turned something that was a weird eat too much candy and wear a monster mask thing into something beautiful where we discuss where we came from.
Lex Fridman
(02:41:57)
I have to ask about the giant stones the Inca were somehow able to move and fit together perfectly. Is it understood how they were able to do that so well?
Ed Barnhart
(02:42:13)
No. The moving of it, I think we have reasonable theories for. There are ways to pivot large weights. There’s a great guy named Wally Wallington, a retired contractor here in the US who built Stonehenge in his backyard in Michigan, single-handedly showing how you can move big stones. So I think Wally’s already figured out how to move them. It’s the perfect fit, stones so carefully fit together that you couldn’t even put a dime in between them, that I think still has people baffled. The common archeological wisdom you’d find in a textbook is that they just kept pecking away at it with hammer stones and setting them and resetting them until they were perfect, which has to be bullshit. There is no way that they were just that meticulous. I mean, everybody’s got a hammerstone. I personally think it’s acids.

(02:43:23)
I think they melted them together. And there are weird places, when you really look closely at these stones, which I’ve done a number of times. I’m going back next month to Machu Picchu and especially Cusco. I walk around in the alleys where these 500-to-1,000-year-old walls are still there. And I see things like the crystals in the andesite are almost stitched together along the seams. The andesite around it is melted and the crystals haven’t. And there are other places where there are weird wipes on the wall. It’s just melted, like somebody took a rag and wiped it while it was soft. Lots of talk about soft stones turning hard too. I haven’t been able to prove it. This is one of these end-of-my-archeological-career chapters. I’m either going to prove myself wrong or prove it, but I think they used acids. My dad’s a chemist, and he told me a long time ago that there’s no way, there are no naturally occurring acids. But my current theory, actually, I got the idea initially from the show Breaking Bad.

(02:44:44)
I don’t know if you ever saw that show, but there’s a point in which they’re trying to dissolve a body and they’re using hydrofluoric acid and it goes right through the ceiling. That hydrofluoric acid is so fascinating. It won’t go through plastic, and you can also bring it in as inert parts and then combine them. The Inca made tons of jewelry out of fluorite. Fluorite is big in the Andes, and they also mined a lot of things for gold and silver. And a byproduct of that mining is sulfuric acid.

(02:45:23)
You put sulfuric acid and fluorite together and you get hydrofluoric acid, and that will burn through andesite or anything. And if you learned how to do it judiciously and you didn’t care whether servants lost an arm or two, then you could actually use it to fuse these together. And I think they’re fused together. I asked the city of Cusco if I could take some core samples, and they said, go away, gringo. Don’t touch our walls. So actually this next time I’m going to try to talk to the more Quechua authorities in a place called Ollantaytambo, and maybe I can convince them, but right now, they just think I’m a weird-ass gringo who wants to put holes in their walls.
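For reference, the reaction described here is real chemistry, and it is in fact the standard industrial route to hydrogen fluoride: treating fluorite (calcium fluoride) with concentrated sulfuric acid, yielding calcium sulfate and hydrogen fluoride, whose aqueous solution is hydrofluoric acid. As a balanced equation:

```latex
% Fluorite + sulfuric acid -> calcium sulfate + hydrogen fluoride
\mathrm{CaF_2} + \mathrm{H_2SO_4} \longrightarrow \mathrm{CaSO_4} + 2\,\mathrm{HF}
```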
Lex Fridman
(02:46:15)
That’s a fascinating theory. And so how could you get to the bottom of that? So getting core samples to see if there’s some kind of trace.
Ed Barnhart
(02:46:24)
The chemists I’m working with say that if there was hydrofluoric acid in between these stones, then from a core sample right along the seam they can separate out the elements in there and detect whether there were actually traces of hydrofluoric acid. I wanted to go straight to burning rocks, but they were like, no, we already know that’s true. I mean, yeah, we can burn some rocks, but it would happen, and that’s just chemistry. We’ve got to prove that it happened in the walls. So go get us samples. And that was before COVID and all sorts of things. You know how it is. You’re probably the same way, where you’ve got a thousand ideas, and the ones that are fruitful, you run with, and the other ones you’ll get back to.
Lex Fridman
(02:47:07)
That’d be fascinating if true, and I hope you do show that it’s true or false, either one.
Ed Barnhart
(02:47:13)
I’ll try to disprove it.
Lex Fridman
(02:47:14)
Disprove it. Yeah. I wonder if we discount how much amazing stuff a collection of humans can do, because it just feels like if a large number of humans are working a little bit, chipping away at stuff at scale, they can do miraculous things. So the question is, how can a large number of humans be motivated to do a thing? When we think about Stonehenge, some very challenging architectural construction, we don’t think about a large number of humans working together.
Ed Barnhart
(02:47:52)
Well, that large number of humans are motivated to work together by a small number of administrators who are dynamic and convincing in some way or another.
Lex Fridman
(02:47:52)
Right.
Ed Barnhart
(02:48:04)
One of my favorite quotes is, and I’m probably going to misquote it here, but I think it’s Margaret Mead who said, never underestimate the power of small groups working together. And the truth is that those are the only people that have ever changed the world. That small dedicated groups of people are what changed the world, and they inspire big groups of people to embrace their vision.
Lex Fridman
(02:48:31)
Yeah, I think we sometimes underestimate how much humans can do across time and across scale.
Ed Barnhart
(02:48:37)
And we are way less capable than we used to be. I mean, the average human had all sorts of skills that at least I personally do not. I’m wearing a shirt, but I can’t make a shirt. That’s for somebody else to do.

Early humans in North America

Lex Fridman
(02:48:53)
You’ve also lectured, which I really enjoyed, about North America, and helped teach me that there were a lot more complex societies going on here for a long period of time. So maybe, can we start at the beginning? Who were the early humans in North America?
Ed Barnhart
(02:49:15)
Well, we go through that Paleo-Indian and archaic period for thousands of years. As we started this conversation, probably 30,000 years ago is a conservative estimate now for when humans first entered the Americas, but the first cultures we get here are mound-builders around the Mississippi and to the east, and then also a totally separate group in what we call the American Southwest now, the Four Corners, who will develop into mostly the people we call the Pueblo people, who are still there today, like the Zuni and Hopi people.

(02:49:54)
So we’ve got these two clusters. The very first major community in North America is in the most unlikely place. It’s in Northern Louisiana. People think I’m crazy when I say this, but there is a pyramid in Northern Louisiana, a big one at a site called Poverty Point that is 3,500 years old. So it’s the same age as the pyramids in Egypt, and it is a giant thing just poking out of the bayous of Louisiana. And people don’t believe me when I say it, but it’s there.
Lex Fridman
(02:50:34)
The Mound Builders, what was that society like in comparison to everything else we’ve been talking about in Mesoamerica [inaudible 02:50:41].
Ed Barnhart
(02:50:40)
They evolved over thousands of years. We call them Mound Builders. This is something I object to. I think we should have a better… We do. The last version of them, we call them Mississippians now. But generally speaking, we call all these guys Mound Builders, but what they built were pyramids. They look like mounds now, and they didn’t build them out of stone. That’s kind of just our inherent Western bias: something that’s built out of stone is sophisticated, and something that’s built out of dirt is rudimentary.

(02:51:14)
But in their full living form, they did have cores of dirt, but then they also had kind of clay caps. So they had terraces. They had whole complexes of buildings up on top. There were kings that lived up there. The biggest of the Mississippian cities is called Cahokia, and it’s right outside of St. Louis.

(02:51:40)
And it was huge. It had a population of 20,000 people and pyramids all over the place, a huge palisade wall around it. It was absolutely gigantic, a thriving metropolis. And we in America have kind of a collective amnesia. We never hear about these massive civilizations. Cahokia was the big first city, but then it spread from the Mississippi all the way to the Atlantic. There were hundreds and hundreds of these big cities that had 5,000 to 10,000 people each.
Lex Fridman
(02:52:19)
Were they their own thing, or was there some kind of thread connecting all of them?
Ed Barnhart
(02:52:23)
They had a unified religion and culture. They were, again, not an empire. So they were warring city-states. There were kind of territories that were owned by big kings, and then the cities around them were kind of the subsidiary lords and kings. And then one kingdom could either ally with a neighbor or have a fight. So they were kind of countries. I think, yeah, we could safely say they were different countries within this patchwork that was the Eastern United States. And it’s so weird that we don’t know this, because it was clearly documented by the Spanish.

(02:53:09)
I’m not talking about just archeology. We find them in archeology now. But Hernando de Soto landed in Florida and went for three years. He went up into the Carolinas and over down into Alabama and Louisiana, and he’s the first one to see the Mississippi up there. For three years he went through city after city after city, unfortunately decimating them, eating all their corn, giving them diseases. But the documentation’s clearly there. He met, collectively, millions of people in a very sophisticated and uniform civilization.
Lex Fridman
(02:53:51)
So it’s disease and stealing of resources. But was there explicit murdering going on?
Ed Barnhart
(02:54:00)
Unfortunately, yeah. He was a murderer and a psycho and a liar. He snowed them into believing he was some kind of deity, a trick he actually learned from the Inca. He was with Pizarro on his first run, went back to Spain rich, had a wife, a castle. Then he got bored and decided to have a reign of terror on North America for three years. He had people burned at the stake. He had his dogs rip them apart. He was very, very brutal. He ruled that area through fear and had absolutely no respect for anybody. He made promises and broke them all the time. He was really a brutal man.

Columbus

Lex Fridman
(02:54:50)
So this whole period when Christopher Columbus came, how did that change everything?
Ed Barnhart
(02:54:58)
Well, there’s a great anthropological body of literature.
Ed Barnhart
(02:55:00)
Anthropological body of literature. It’s called the Columbian Exchange, after Columbus. But it’s all this trade back and forth between the New World and the Old World. And the Old World got just wonderful stuff. All of a sudden their diet didn’t suck. All these vegetables came in. The New World got herd animals. It got pigs and cows and goats that it didn’t have, but it also got 13 infectious diseases. Europe had had wave after wave and kind of had herd immunity on a lot of things, but it didn’t actually go away. It just couldn’t spread like a wildfire through the community. So when they arrived in the Americas, all of a sudden just a pile of horrible diseases hit people. I think in the first 20, 30 years, there were people who had contracted multiple deadly diseases at once and died of them.

(02:56:03)
But the numbers, it’s a shameful part of history, and it wasn’t something that Europe perpetrated on them. Medical science at that time was still the four humors theory, that people were made of yellow bile, black bile, blood, and phlegm. And we did things like, well, you’ve got to bleed him. He’ll feel better then. So we had no idea what an infectious disease was, but the reality was that this horde of diseases hit everyone. And the numbers are now saying that in the first 50 years, 90% of everybody was dead, and our estimates of the number of people have increased as well. We’re thinking it was somewhere around 150 million people, and 90% of them died. And with them, all their knowledge. Just, I mean, imagine the moment. Who dies when things get bad? It’s the young and the old. So all the knowledge keepers die suddenly.

(02:57:07)
The children die. This next generation is half taught and now completely demoralized, thinking that this is a spiritual attack, that their gods hate them, that the only way out of it is to accept this new Christianity. But they don’t want to have to bring kids into this world where everybody’s dying. And even if they do, they can’t teach them what the old people were going to teach them, because the old people are gone and didn’t finish the transmission. So in a single terrible moment in human history, a generation loses all its knowledge. So a lot of the things these people knew just blipped out.
Lex Fridman
(02:57:50)
But with that also, just the wisdom of the entire civilizations-
Ed Barnhart
(02:57:58)
So much of-
Lex Fridman
(02:57:59)
… fades away.
Ed Barnhart
(02:58:00)
… what they knew was just lost at that moment. We have the Maya, who had those hieroglyphs, and we’ve learned a lot from those.
Lex Fridman
(02:58:07)
Yeah. But not a significant integration of that wisdom. When the Europeans came, it wasn’t like the cultures were integrated. It was a story of domination. Of erasure, essentially.
Ed Barnhart
(02:58:22)
In North America, there’s a new term in the literature that I like. We call it the Mississippian Shatter Zone. That Mississippian civilization was millions of people, but they got spread out all over the place over the next centuries. And now we have this Shatter Zone where we have ruins, and the people that were actually from those ruins are somewhere else, on a reservation far away. And I’m just about to talk to a Cherokee man who listened to some of the things I had to say and says, “All those Ho-Chunk things you were saying from that Ho-Chunk culture, my grandparents talk about this sort of thing too. Can I talk to you by phone and tell you about these things?” So we’ve got this Shatter Zone where we’re going to try to put the puzzle back together, especially in terms of Mississippian religion. I really think we’re making headway in this generation, and it’s exciting to be part of piecing this old religion and its mythology back together.

Vikings

Lex Fridman
(02:59:25)
A lot of people refer to Christopher Columbus as the person who discovered America, but I read that the Vikings reached North America much earlier, around 1000 C.E. Why do you think they didn’t expand and colonize?
Ed Barnhart
(02:59:44)
Because they got their ass kicked.
Lex Fridman
(02:59:47)
Okay. Simple.
Ed Barnhart
(02:59:48)
It’s the truth. It is absolutely true that the Vikings were here. There’s a great site in Nova Scotia called L’Anse aux Meadows, which definitely has what’s left of a Viking colony. It was Leif Erikson and his father Erik the Red, who got kind of kicked out of Europe because they apparently couldn’t stop murdering people. And so they went to Greenland and then kind of island-hopped over to Canada. But I think the culture that was in that area was named the Dorset, and they would have nothing to do with the Vikings.

(03:00:22)
They attacked the Viking settlement every day and did not give them an inch until the Vikings decided it was just worthless and they left it. The Vikings attacked Ireland, and they just found a bunch of monasteries full of gold with a bunch of guys going, “We’re men of God, we don’t fight.” And the Vikings were like, “This is great. That’s great. This will be easy, then. We’ll just loot all these Easter eggs.” But the Native Americans in Canada were not having it. They kicked their ass. In fact, Leif Erikson’s brother Thorvald died there. The natives killed him. He was supposed to be in charge of expanding the settlement, but they just killed him.
Lex Fridman
(03:01:04)
So a lot of the Native American cultures were also, I mean, they’re sophisticated, warring cultures also.
Ed Barnhart
(03:01:11)
Yes, they fought. Especially the Mississippians. Boy, they were tough. And so were the Five Nations. The Mohawk, the Huron, the ones that kicked the Vikings’ ass up there, they were probably Algonquian speakers. They were connected just above the Great Lakes, and they were all very tough people.
Lex Fridman
(03:01:35)
When you think about the Spaniards and the Portuguese and the over a hundred million people that were killed, do you see that as a tragedy of history or is it just the way of history?
Ed Barnhart
(03:01:49)
I think that the epidemics, I consider it a tragedy. That did not have to happen, and that was not a fair fight. Nobody knew what to do about it. There was just a tragic, perfect storm of events. I think that the Spanish and the Portuguese get unfairly maligned in what’s been called the Black Legend, that they just marched into America and murdered everyone. That’s not the fact. It was the diseases that murdered everyone.

(03:02:20)
In fact, there was a really poignant story I read of a Spanish priest in the Amazon, in the northern Brazilian part of the Amazon, where he made this utopian community and he was bringing in people that were getting sick, and he wrote, “I’m baptizing everyone. I have baptized 10,000 people a day, and yet God’s still killing them. Why is he doing this to them? They’re doing everything that I ask them to do. They are submitting to the will of God.” But this guy doesn’t realize that with the same bowl of holy water that he’s baptizing them in, he’s just wiping the disease on everybody’s faces. He’s accelerating it without even realizing. He thinks he’s saving them, but he’s actually killing them. That’s a tragedy. That’s not just spoils-go-to-the-victor stuff. That’s just straight-up tragedy.
Lex Fridman
(03:03:19)
Yeah, yeah. But that one is hard to know what to do with, like the Black Death. I mean, infections don’t operate on normal human terms, right? They just go through entire populations. Back to wild ideas.

Aliens

Ed Barnhart
(03:03:37)
All right, just my style.
Lex Fridman
(03:03:41)
I mean, we didn’t really talk about how life originated on Earth or how humans have evolved, and we did talk about how there could be a lot of stuff in ancient history we haven’t even uncovered yet. Do you think it’s possible that other intelligent civilizations from outside of Earth, aliens, ever visited?
Ed Barnhart
(03:04:07)
You had me right until the ever-visited thing. That one I’m not entirely sure about. I’m not sure whether we have any… We certainly have no archaeological proof that I would cite or contemplate as evidence of such. But of the guys that discovered DNA, Watson and Crick, Watson, who actually habitually used hallucinogens to invigorate his thinking, said that he thought that DNA on this planet was way too complex to have developed over the time period that it had at its disposal, and that his guess was that our DNA was somehow seeded from outside of our planet. Take that for what it is. But a guy who we respect on many other levels also said that. So that’s interesting. But in terms of aliens visiting us, I don’t know. It does smack of a kind of human hubris that we think we’re important enough for some advanced species to give a shit about us.

(03:05:19)
Statistically speaking, the universe is way too big. We can’t be the only sentient beings. There’s got to be somebody else out there. Whether they care about us, that’s a question. I’ve been on Ancient Aliens a number of times. I show up and I’m an educator. I mean, refusing to be part of the conversation is an immediate fail in my book. But there was one time where they asked me at the end, “Do you have anything else you want to say?” And I said, “Well, y’all’s premise is that aliens came down a long time ago and they gave humanity these wonderful gifts of science and medicine, engineering, all these things. Today we also have a lot of stories of the aliens coming down, but now all they’re doing is mutilating cows and sodomizing rednecks. Like whatever we did, we super pissed them off, apparently.”
Lex Fridman
(03:06:18)
The quality of the gifts has decreased rapidly. That’s an interesting thought you’ve mentioned. What, archaeologically, would you have to see to be like, this might be alien?
Ed Barnhart
(03:06:33)
First and foremost, a technology that doesn’t belong there. I mean, if we just run with the premise that somebody was capable of making a vehicle that could get them from somewhere far away to here, that was almost certainly mechanical. Now, I love the aliens thing where biomechanical is something it certainly could be, and that would disintegrate. We wouldn’t see that at all. But I would expect some kind of technology that showed up out of the blue and changed things. That would be something. But I would think mechanical, or a substance that’s not from here.
Lex Fridman
(03:07:18)
But of course we would only see the results of that. You mean literally a mechanical thing?
Ed Barnhart
(03:07:24)
Right. Some sort of thing like that. The typical thing people say is how did they move these giant stones? But just look at that on the face for a second. Aliens come from across the universe to meet humans, and the thing they tell them is how to move rocks? Are you fucking kidding me? I mean, give them antibiotics or a combustion engine or something. They came across the universe and they showed them how to move big rocks? I mean, that doesn’t make any sense. That just doesn’t make any sense.

Earth in 10,000 years

Lex Fridman
(03:08:03)
What do you think earth will look like 10,000 years from now?
Ed Barnhart
(03:08:09)
That’s an interesting question. I think it will be a lot more automated, or it’ll be a smoldering pile. There is a possibility we could end ourselves. There’s always that possibility that we’ve really opened Pandora’s box in some regards. I did listen to one of your podcast guests on what would happen in the case of nuclear war. That was chilling. Her opinion was that we would certainly burn everything to a crisp within minutes, apparently. So we have that capacity. That’s scary. That’s a possible future for us. But I’m an optimist. I’d like to think that guys like you are going to make friendly robots who make my job better.
Lex Fridman
(03:08:55)
But 1,000, 10,000 years is a long time. And technology is improving and becoming more advanced rapidly, and the rate of that improvement is increasing ever more so.
Ed Barnhart
(03:09:10)
That’s the part that frightens me actually. I don’t know, does that frighten you?
Lex Fridman
(03:09:13)
Yes. Terrifying.
Ed Barnhart
(03:09:16)
I heard somebody say, I forget who it was, that systems of any kind, human systems, biological systems, can be put on a graph of change over time. And on any graph where the change is way faster than the time and the line starts going straight up, that is a system in crisis. In almost any biological system that changes that fast over that little time, you’d describe it as a crisis. When you apply that chart to technological change, it’s a crisis.
Lex Fridman
(03:09:59)
From that perspective, absolutely. But I also have a faith in human ingenuity that we humans like to create a really difficult situation and then come up with ways to get out of that difficult situation. And in so doing innovate and create a lot of awesome stuff and sometimes cause a lot of suffering. But on the whole, on average, make a better world. But with nuclear weapons, the bad stuff might actually lead to the death of everybody.
Ed Barnhart
(03:10:34)
I guess there’s always that chance, but I am an optimist. I think you’re an optimist too. I think exactly as you just said. I think that the greatest capacity of humans is our ability to innovate. And we are never more innovative than when we’re under distress. I think that a lot of the developments of humans over the last thousands of years have come from that: we didn’t change the world when we were comfortable. It was when we were in crisis. Necessity is the mother of invention. But I think we’ll be all right. I think that this impending climate crisis is real and happening. Actually, I’m personally going to answer a question that you didn’t even ask me.

(03:11:25)
I think we’re wasting our time thinking that we can reverse this. We’re delusional. I’m all for electric cars and being good stewards of the environment, but we are wasting our time not technologically adapting to what’s about to happen. We’re spending too much time pretending. The average American thinks if we all just drive electric cars, we’ll be okay. That’s bullshit. That’s not going to happen. We need to start making technologies that desalinize water, a host of things. We need to use our technological capacity to accept it and adapt, instead of Pollyanna thinking we can make it go away.
Lex Fridman
(03:12:11)
Yeah, kind of accept that the world will change and a lot of big problems will arise and just develop technology that addresses them.
Ed Barnhart
(03:12:22)
I think you have some guys that have their finger on the pulse there. We need to start thinking about how we’re going to survive this, not that we’re going to make it go away.
Lex Fridman
(03:12:30)
And not just survive, thrive. Again, we’re pretty innovative in that regard. But if some catastrophic thing happens, or we just leave this planet, what do you think would be found by the aforementioned alien civilizations when they visit? The anthropologists, the grad-student anthropologists that visit Earth and study it: how much of what we know, have, and love and think of as human civilization will be lost, do you think?
Ed Barnhart
(03:13:02)
Well, time moves on and things that are perishable perish. So you didn’t put a time element in there, but I would say that everything that can perish will, and whoever shows up here will be stuck with only the things that didn’t perish. So we’ll have buildings, plaques, but they won’t have any books. They won’t have any billboards. They’ll have an incomplete record, like the one I have. I one time did a talk in Sioux Falls and I said, I drove in here and there was a big obelisk in front of the town. And everywhere I go, I see the names Lewis and Clark. And a thousand years from now, if I was an archeologist investigating this place, I would think that it was founded by the Egyptians and their kings were named Lewis and Clark. But the truth is, Lewis and Clark stayed one night here; it’s just a big deal to the town. So I would be so wrong about what I thought about your town based on what was preserved.
Lex Fridman
(03:14:13)
It’s so beautiful as a thought experiment. What would archeologists be really wrong about? And what could they possibly be right about?
Ed Barnhart
(03:14:22)
Washington D.C. was clearly made by a combination of the Egyptians and the Greeks and the Romans because that’s what all the architecture is.
Lex Fridman
(03:14:31)
Yeah. And would they be able to reconstruct the important empires, the powerful empires, and the warring empires?
Ed Barnhart
(03:14:41)
For that matter, have me and my colleagues done that at all? I am almost certain that the Maya would just gut-laugh at what I think I know about them.
Lex Fridman
(03:14:50)
I wonder, do you ever think about what we, as a human civilization, are wrong about the most? Like mainstream archaeology. Just like a suspicion. What could we get completely wrong? Well, one way to get something wrong is a totally lost civilization. An obviously gigantic civilization that was there along with the Maya, or something like this, 10,000 years ago.
Ed Barnhart
(03:15:17)
There’s certainly that. There could be things that were either wiped away or still hiding under the oceans that would completely change the way we think about things.
Lex Fridman
(03:15:26)
And everybody knew they existed and everybody interacted with them. It was [inaudible 03:15:31].
Ed Barnhart
(03:15:30)
I think it’s our estimation of their motivations that we’re probably most wrong on. A long time ago, I’d come up with all sorts of theories. I was always thinking about stuff. And my teacher Sheila looked at me and she said, “If you don’t stop thinking like a western European and start trying to put yourself in the mindset of these people, you will never understand any of it.” Which I’ve always taken to heart. I mean, I really do. When I approach these things, I try to step out of my cultural assumptions, try to think like they would think, as best I can. And it’s very different. This whole thing, the Maya are cyclical, the whole sacrifice, we’re so obsessed with that. But that was an austere, actual sacrifice on their part. They weren’t just, “Hey, let’s all get together and kill that guy that’s pissing us off.” I mean, they were giving the best of them. It was a different mentality. This was not brutal. This was a bona fide sacrifice on their part, a loss.
Lex Fridman
(03:16:38)
Plus the whole mystery of the puppy that eventually starts having sex with [inaudible 03:16:44].
Ed Barnhart
(03:16:44)
I’m going to unweave that one of these days.
Lex Fridman
(03:16:45)
One of these days. Now that puppy appeared on pottery?
Ed Barnhart
(03:16:51)
All over pottery. He’s everywhere. I got to write this book. This next year is the year I’m going to write my fanged deity book, and I will have a whole chapter dedicated to the puppy.
Lex Fridman
(03:17:04)
The mystery solved. I mean, it could just be the birth of memes, of humor. I don’t know. I mean, again, humor. You don’t know the nature of their humor, of what their jokes are.
Ed Barnhart
(03:17:14)
Oh, that’s a neat one too. And that’s so human. I’ll tell you a little side story here. When I worked with the Maya people in Palenque, I spent three years making this map of the city, hiking through the jungle every day. And they would talk to each other in their own language. Tzeltal was the group I was working with. But I noticed after a while they were big jokers. They loved to make jokes and they would laugh at jokes, but then one of them would say something and the other ones would go, hoo hoo. And I eventually asked, “What is that? Why do you guys always make that hoo hoo noise?” And he said, “That’s because…” He had made a really smart pun. It was like he said three different things at once. It was a turn of phrase that was smart. And they didn’t laugh at that. They had a noise for when somebody said something just super clever. So there’s also that: just a clever turn of speech.
Lex Fridman
(03:18:14)
Yeah. Wit.
Ed Barnhart
(03:18:15)
And I think about that when I’m a hieroglyphic translator. Here’s a beautiful thing that’s going to be like a poem or a political statement, and I’m just ploddingly looking in a dictionary for what that word means. There’s probably double, triple entendres all through this text, and the real meaning is in the subtext. And I’m thinking they’re talking about corn, when they’re really talking about the nature of life.
Lex Fridman
(03:18:41)
It could be satire. It could be, as it was in the Soviet Union, that there’s a dictator, maybe an overpowering king, and you’re not allowed to actually speak. You have to hide the thing you’re actually trying to say in the subtext, in all of that.
Ed Barnhart
(03:19:00)
There was a funny Maya ceramic. The ceramics are neat, because the monuments can be kind of broken records: I’m the king, I was born this time, I beat these people up, I married this woman, I died. But the ceramics will tell us things out of mythology stories. And there was this one with a rabbit looking at the merchant god. And nobody could translate the text. And finally an eastern European, actually a Ukrainian guy, translated it, and the rabbit’s saying to the merchant god, “Bend over and smell my ass.” And like, oh man, we were expecting this wonderful piece of mythology. But no, it translates, “Bend over and smell my ass.” That’s great. That’s human.
Lex Fridman
(03:19:47)
As we mentioned previously, human nature does not change. You mentioned Palenque and mapping it. Just out of curiosity, what is that process like? It seems fascinating.
Ed Barnhart
(03:19:58)
Oh, it was a great adventure. I loved it, but it was difficult. I woke up every morning thinking I will be hurt today somehow. I don’t know how. I don’t know badly, where on my body it will occur, but it’s going to happen. It was the jungle.
Lex Fridman
(03:20:14)
So in the jungle, what’s the process like? What do you have to do to map it?
Ed Barnhart
(03:20:20)
Well, it was tricky too, because it was also a national forest. So the forestry department didn’t want us to cut down anything more than we had to. So we basically just cut tunnels through the foliage, and we’d map everything twice. The first thing we’d do is I’d go in, find a building, draw it on a piece of graph paper. And I’d say, “You guys go north. You guys go east, west. Find other buildings. And when you find them, pace back to this one.” And so I’d start making a map, and I’d make the whole… One piece of graph paper was enough. Then we’d bring the machine in, we’d bring the laser theodolite and get really accurate information. But on that piece of paper, I would write, “Don’t bring the machine this way. There’s a tree fall.” Or, “Stand on top of this building and you’ll see four different buildings at once from this one.”
Lex Fridman
(03:21:11)
And all of this is in dense jungle?
Ed Barnhart
(03:21:14)
Right. And the farther we got off the road, the thicker it was. Sometimes it would clear out, but in certain places, if it was low, it would be such thick vegetation, and it would grow back so fast. Sometimes we would cut tunnels through tall grass and we’d come back five days later and they were gone. We couldn’t even find where our trails were. They would grow back that fast.
Lex Fridman
(03:21:43)
But you see the building, so you could see?
Ed Barnhart
(03:21:45)
Right. And that was the fun part. I mean, sometimes it would just be a little neighborhood with little low buildings no bigger than this table, but sometimes just five more meters in and I’m standing under a pyramid that nobody had ever mapped. Like, wow, I’ve just found another one. And some days, on good days, we’d find three pyramids. And I felt that was a much more exciting job than the typical excavation, say. All my buddies were just in a hole for the whole week in the middle of the city, whereas I’m dancing around through the jungle. I could find 10 buildings today. I might find a pyramid today. Who knows?
Lex Fridman
(03:22:23)
What does that feel like, to find a pyramid or buildings, to be one of the only humans not from that civilization to ever see this thing? What does that feel like?
Ed Barnhart
(03:22:32)
It’s great. I love that feeling. I am an explorer at heart, so finding something like that… When I was 25 years old, I found a whole Maya city. I got to name it; its name is Ma’ax Na. It’s off in the Belizean jungle. And that was just outrageous. I mean, it almost… That one almost depressed me. I had this great life ambition that I would find a lost city. And then I did it at 25 and I was like, God, now what do I do? I thought that was supposed to take me my whole life. I actually wrote a bunch of letters to NASA trying to get them to let me be the first archaeologist on Mars. I never got a single reply back. I’m sure I’m on NASA’s list as some weirdo.
Lex Fridman
(03:23:27)
How’d you find a Mayan city?
Ed Barnhart
(03:23:29)
I used a topography map of the area and I played the game: if I was a Maya, where would my favorite place to live in this big area be? I looked for the biggest mountain, because they call all of their pyramids tuun witz, stone mountains. I knew they loved mountains. And when I found that mountain, there were two others right next to it that made a triangle, and they love those triads, and there were rivers in between them. And I thought, that’s it. That’s where I would build the city. And I hiked out there over two seasons with students. The other grad students were like, “He’s just having his students wander in the jungle all day.” But I came back with a city.

Hope for the future

Lex Fridman
(03:24:11)
So given that you’ve looked into the deep past of humanity, what gives you hope about our future, maybe our deep future of this human civilization?
Ed Barnhart
(03:24:25)
That’s a good one, and I do have hope. I do have hope. I believe in the spirit of humankind. As a person who has studied history, I kind of feel like history follows kind of a sine wave. There’s highs and there’s lows, but no matter how low we go, we get up again and we climb. And I think that humanity will continue that. We will rise to the challenges. Now, some of the challenges may be created by ourselves as well, but we will adapt and overcome. That’s what we do.
Lex Fridman
(03:25:01)
Yeah, humans find a way, right? That’s the thing you see with history. Even when the empires collapse, the humans that come out of that, they pick themselves up and find another way. They build anew.
Ed Barnhart
(03:25:17)
And the people I study believe in the cyclical nature of life: that you really can’t… life can’t continue without death being part of the cycle. We get our lows, we get our highs, but the cycle continues forever.
Lex Fridman
(03:25:31)
I should mention that you have a lot of great lectures on The Great Courses, but you also have an amazing podcast, ArchaeoEd. If people want to listen to it, this is a tough question, but what would you recommend? What episodes should they listen to? What’s the answer?
Ed Barnhart
(03:25:54)
Oh, that is a tough question.
Lex Fridman
(03:25:56)
What’s a good sampling? It’s like asking a chef what’s the best stuff on the menu.
Ed Barnhart
(03:26:03)
Well, different strokes for different folks. I do two different things on that podcast. Sometimes I just teach about cultures that you’ve never heard about. I love… I start off by saying, “It’s my podcast and I’ll talk about whatever the heck I want to talk about.” Sometimes I talk about really specific things like a tool type or an animal type, but my favorite ones have become when I just tell my stories of my adventures. I’ve got a lot of weird adventure stories and it’s been fun and they’ve been very well received. I can put my humor in there and I can talk about the things that went right, the things that went wrong. The adventures that I had are all part of this ArchaeoEd thing. ArchaeoEd’s kind of a double entendre. It’s me, I’m just Ed. But it’s also education.

(03:26:53)
What I’m really trying to do with this, too, is specifically the Americas. I want to be part of the reawakening that there were these great civilizations here, especially North America. I think that we have a group amnesia that there were no great civilizations here before Europe showed up. That’s simply not true. I think it should be part of our history books. In fact, I have a program called Before the Americas that would introduce, as part of American history, the part before European contact. And I think that kids at the K through 12 level should grow up not being told this fallacy that no one was here before we showed up in 1492. And one of these days I’m going to find a funder to help us put together Before the Americas, and we’re going to make it part of the curriculum for every kid in the U.S. to know the full history of this country.
Lex Fridman
(03:27:55)
That’s a great project. Thank you so much. Thank you for talking today. Thank you for all the fascinating ideas that you put out into the world, and I can’t wait to hear your new course.
Ed Barnhart
(03:28:07)
Thank you so much, Lex. It was a real pleasure.
Lex Fridman
(03:28:10)
Thanks for listening to this conversation with Ed Barnhart. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Joseph Campbell: “Life is but a mask worn on the face of death, and is death then but another mask? How many can say, asks the Aztec poet, that there is or is not a truth beyond?” Thank you for listening and hope to see you next time.

Transcript for Michael Saylor: Bitcoin, Inflation, and the Future of Money | Lex Fridman Podcast #276

This is a transcript of Lex Fridman Podcast #276 with Michael Saylor.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Michael Saylor
(00:00:00)
Remember George Washington, you know how he died? Well-meaning physicians bled him to death. And this was the most important patient in the country, maybe in the history of the country, and we bled him to death trying to help him. So when you’re actually inflating the money supply at 7%, but you’re calling it 2% because you want to help the economy, you’re literally bleeding the free market to death. But the sad fact is, George Washington went along with it because he thought that they were going to do him good. And the majority of the society, most companies, most conventional thinkers, the working class, they go along with this because they think that someone has their best interest in mind and the people that are bleeding them to death, they believe that prescription because their mental models are just so defective.
Lex Fridman
(00:01:00)
The following is a conversation with Michael Saylor, one of the most prominent and brilliant Bitcoin proponents in the world. He is the CEO of MicroStrategy, founder of Saylor Academy, graduate of MIT. And Michael was one of the most fascinating and rigorous thinkers I’ve ever gotten a chance to explore ideas with. He can effortlessly zoom out to the big perspectives of human civilization and human history, and zoom back in to the technical details of blockchains, markets, governments and financial systems. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Michael Saylor.

Grading our understanding

Lex Fridman
(00:01:43)
Let’s start with a big question of truth and wisdom. When advanced humans or aliens or AI systems, let’s say, five to 10 centuries from now, look back at earth on this early 21st century, how much do you think they would say we understood about money and economics, or even about engineering, science, life, death, meaning, intelligence, consciousness, all the big interesting questions?
Michael Saylor
(00:02:12)
I think they would probably give us a B minus on engineering, on all the engineering things, the hard sciences.
Lex Fridman
(00:02:23)
A passing grade.
Michael Saylor
(00:02:25)
We’re doing okay. We’re working our way through rockets and jets and electric cars and electricity, transport systems and nuclear power, and space flight and the like. And if you look at the walls of the Great Court at MIT, it’s full of all the great thinkers, and they’re all pretty admirable. If you could be with Newton or Gauss or Madame Curie or Einstein, you would respect them. I would say they’d give us a D minus on economics, an F plus or a D minus.
Lex Fridman
(00:03:08)
You see, they have an optimistic vision. First of all, an optimistic vision of engineering, because everybody you’ve listed, not everybody, but most people you’ve listed, are just from the past couple of centuries, and maybe it stretches a little farther back. But mostly all the cool stuff we’ve done in engineering is from the past couple of centuries.
Michael Saylor
(00:03:26)
Archimedes had his virtues. I studied the history of science at MIT, and I also studied aerospace engineering, so I clearly have a bias in favor of science. And if I look at the past 10,000 years and I consider all of the philosophy and the politics and their impact on the human condition, I think it’s a wash. For every politician that came up with a good idea, another politician came up with a bad idea. And it’s not clear to me that most of the political and philosophical contributions to the human race and the human condition have advanced things so much. I mean, we’re still taking guidance from and admiring Aristotle and Plato and Seneca and the like. And on the other hand, think about what has made the human condition better: fire, water, harnessing of wind energy. Try to row across an ocean: not easy.
Lex Fridman
(00:04:34)
And for people who are just listening or watching, there’s a beautiful, sexy ship from the 16th, 17th century.
Michael Saylor
(00:04:43)
This is a 19th century handmade model of a 17th century sailing ship, of the type that the Dutch East India Company used to sail the world and trade. So the original was made sometime in the 1600s, and then this model was made in the 19th century by individuals.
Lex Fridman
(00:05:04)
Both the model and the ship itself are engineering at its best. And just imagine, just like rockets flying out to space, how much hope this filled people with, exploring the unknown, going into the mystery, both the entrepreneurs and the business people and the engineers and just humans. What’s out there? What’s out there to be discovered?
Michael Saylor
(00:05:24)
Yeah, the metaphor of human beings leaving shore or sailing across the horizon, risking their lives in pursuit of a better life, is an incredibly powerful one. In 1900, I suppose the average life expectancy was 50. During the Revolutionary War, while our founding fathers were fighting to establish life, liberty, the pursuit of happiness, the Constitution, average life expectancy was 32, somewhere between 32 and 36. So all the sound and the fury doesn’t make you live past 32, but what does? Antibiotics, conquest of infectious diseases. If we understand the science of infectious disease, sterilizing a knife and harnessing antibiotics gets you from 50 to 70, and that happened fast. That happened from 1900 to 1950 or something like that. And I think if you look at the human condition, you ever get on one of those rowing machines where they actually keep track of your watts output when you’re on there… 200 is a lot. Okay, 200 is a lot. So a kilowatt-hour is all the energy that a human, a trained athlete, can deliver in a day.

(00:06:50)
And probably not 1% of the people in the world could deliver a kilowatt-hour in a day. And the commercial value of a kilowatt-hour, the retail value, is 11 cents today, and the wholesale value is 2 cents. And so you have to look at the contribution of politicians and philosophers and economists to the human condition, and it’s at best a wash one way or the other. And then if you look at the contribution of John D. Rockefeller when he delivered you a barrel of oil, and the energy in oil, liquid energy. Or the contribution of Tesla, as he delivered electricity. And what’s the impact on the human condition if I have electric power, if I have chemical power, if I have wind energy? If I can actually set up a reservoir, create a dam, spin a turbine, and generate energy from a hydraulic source, that’s extraordinary. And so our ability to cross the ocean, our ability to grow food, our ability to live, it’s technology that gets the human race from a brutal life where life expectancy is 30 to a world where life expectancy is 80.
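The kilowatt-hour comparison can be checked with back-of-the-envelope arithmetic. A quick sketch using the power output and electricity prices quoted in the conversation; the 200-watt, 5-hour workload is an assumption consistent with the rowing-machine example, not a figure from the transcript:

```python
# A trained athlete sustaining roughly 200 watts for about 5 hours
# delivers ~1 kilowatt-hour (the workload here is an assumption
# consistent with the rowing-machine example in the conversation).
watts = 200
hours = 5
energy_kwh = watts * hours / 1000

# Valued at the electricity prices quoted: 11 cents retail, 2 cents wholesale.
retail_value = energy_kwh * 0.11
wholesale_value = energy_kwh * 0.02
print(energy_kwh, retail_value, wholesale_value)
```

A full day of hard human labor is worth pennies at grid prices, which is the point of the comparison.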
Lex Fridman
(00:08:19)
You gave a D minus to the economists. So are they too, like the politicians, a wash in terms of there being good ideas and bad ideas, and that tiny delta between good and bad is how you squeak past the F plus into D minus territory?
Michael Saylor
(00:08:36)
I think most economic ideas are bad ideas.
Lex Fridman
(00:08:39)
Most?
Michael Saylor
(00:08:42)
Take us back to MIT and you want to solve a fluid dynamics problem. Design the shape of the hull of that ship. Or you want to design an airfoil, a wing. Or if you want to design an engine or a nozzle in a rocket ship, you wouldn’t do it with simple arithmetic, you wouldn’t do it with a scalar. There’s not a single number, right? It’s vector math. Computational fluid dynamics is n-dimensional, higher-level math, complicated stuff. So when an economist says the inflation rate is 2%, that’s a scalar. And when an economist says it’s not a problem to print more money because the velocity of the money is very low, monetary velocity is low. That’s another scalar. Okay.

(00:09:34)
So the truth of the matter is, inflation is not a scalar. Inflation is an n-dimensional vector. Money velocity is not a scalar. Saying, “What’s the velocity of money? Oh, it’s slow or it’s fast,” ignores the question of what medium the money is moving through. It’s the same as asking, what’s the speed of sound? Okay, well, what is sound, right? Sound is a compression wave. It’s energy moving through a medium, but the speed is different. So for example, the speed of sound through air is different than the speed of sound through water. And sound moves faster through water, it moves faster through a solid, and it moves faster through a stiffer solid. So there isn’t one speed of sound.
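The scalar-versus-vector point can be made concrete with a toy sketch: track price changes per sector as a vector, and notice that different basket weightings collapse the same vector to very different “headline” scalars. All the sector numbers below are invented for illustration:

```python
# Hypothetical year-over-year price changes by sector (invented numbers):
# inflation as a vector, one component per medium the money moves through.
inflation_vector = {
    "consumer_goods": 0.02,
    "housing": 0.19,
    "equities": 0.25,
    "bonds": 0.30,
}

def headline(weights):
    """Collapse the inflation vector to a single scalar with chosen weights."""
    return sum(weights[sector] * inflation_vector[sector] for sector in weights)

# A basket that ignores assets reports 2% "inflation"...
cpi_like = headline({"consumer_goods": 1.0})
# ...while an equal weighting over the same data reports 19%.
asset_aware = headline({sector: 0.25 for sector in inflation_vector})
print(cpi_like, asset_aware)
```

Same underlying vector, two headline numbers an order of magnitude apart; the choice of weights does all the work.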
Lex Fridman
(00:10:27)
What is the fundamental problem with the way economists reduce the world down to a model? Is it too simple or is it just even the first principles of constructing the model is wrong?
Michael Saylor
(00:10:37)
I think that the fundamental problem is, if you see the world as a scalar, you simply pick the one number which supports whatever you want to do, and you ignore the universe of other consequences from your behavior.
Lex Fridman
(00:10:57)
In general, I don’t know if you’ve heard of it, Eric Weinstein has been talking about this with gauge theory, so different kinds of approaches from the physics world, from the mathematical world to extend past this scalar view of economics. So gauge theory is one way that comes from physics. Do you find that way of exploring economics interesting? So outside of cryptocurrency, outside of the extra technologies and so on, just analysis of how economics works, do you find that interesting?
Michael Saylor
(00:11:30)
Yeah, I think that if we’re going to want to really make any scientific progress in economics, we have to apply much more computationally intensive and richer forms of mathematics.
Lex Fridman
(00:11:43)
So simulation perhaps, or…
Michael Saylor
(00:11:45)
Yeah. When I was at MIT I studied system dynamics. They taught it at the Sloan school. It was developed by Jay Forrester who was an extraordinary computer scientist. And when we created models of economic behavior, they were all multidimensional nonlinear models. So if you want to describe how anything works in the real world, you have to start with the concept of feedback. If I double the price of something, demand will fall and attempts to create supply will increase and there will be a delay before the capacity increases. There’ll be an instant demand change, and there’ll be rippling effects throughout every other segment of the economy downstream and upstream of such thing.

(00:12:37)
So it’s common sense, but most economics, most classical economics, it’s always taught with linear models, fairly simplistic linear models. And oftentimes, I’m really shocked today that the entire mainstream dialogue of economics has been captured by scalar arithmetic. For example, if you read any article in New York Times or the Wall Street Journal, right, they just refer to there’s an inflation number or the CPI, or the inflation rate is X. And if you look at all the historic studies of the impact of inflation, generally they’re all based upon the idea that inflation equals CPI, and then they try to extrapolate from that and you just get nowhere with it.
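The feedback structure described here, an instant demand response but a delayed supply response to a price shock, is exactly what system-dynamics models capture and linear models miss. A minimal toy simulation; every parameter is invented for illustration:

```python
# Toy system-dynamics-style sketch of the feedback loop described above:
# a price shock cuts demand immediately, while capacity catches up only
# after a build-out delay. All parameters are invented for illustration.
def simulate(months=12, price_shock=2.0, elasticity=-0.5,
             supply_delay=6, adjust_rate=0.2):
    base = 100.0
    capacity = base
    # High prices attract new capacity, but only after the delay.
    target_capacity = base * price_shock
    history = []
    for t in range(months):
        # Demand reacts to the price instantly (constant-elasticity form).
        demand = base * price_shock ** elasticity
        if t >= supply_delay:
            # Capacity closes a fraction of the remaining gap each month.
            capacity += adjust_rate * (target_capacity - capacity)
        history.append((t, round(demand, 1), round(capacity, 1)))
    return history

for month, demand, capacity in simulate():
    print(month, demand, capacity)
```

Even this crude nonlinear model shows behavior a scalar can’t: demand drops at once, supply sits flat for half a year, then ramps, so the “effect” of the shock depends entirely on when you measure it.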
Lex Fridman
(00:13:32)
So at the very least, we should be considering inflation and other economic concepts as nonlinear dynamical systems. So nonlinearity, and also just embracing the full complexity of how the variables interact, maybe through simulation, maybe have some interesting models around that.
Michael Saylor
(00:13:50)
Wouldn’t it be refreshing if somebody for once published a table of the change in price of every product, every service, and every asset, in every place, over time?

Inflation

Lex Fridman
(00:14:01)
You said table. Some of that also is the task of visualization, how to extract from this complex set of numbers patterns that somehow indicate something fundamental about what’s happening. So summarization of data is still important. Perhaps summarization not down to a single scalar value, but looking at that whole sea of numbers, you have to find patterns, like what is inflation in a particular sector? How does it change over time, maybe across different geographical regions, things of that nature. I don’t even know what that task is. You could look at machine learning, you can look at AI with that perspective, which is how do you represent what’s happening efficiently, as efficiently as possible? That’s never going to be a single number, but it might be a compressed model that captures something beautiful, something fundamental about what’s happening.
Michael Saylor
(00:15:02)
It’s an opportunity for sure. Take, for example, the pandemic: the response of the political apparatus was to lower interest rates to zero, and to start buying assets, in essence printing money. And the defense was, there’s no inflation. But of course you had one part of the economy where it was locked down, so it was illegal to buy anything. It was either illegal or it was impractical, so it would be impossible for demand to manifest. So of course, there is no inflation. On the other hand, there was instantaneous, immediate inflation in another part of the economy. For example, you lowered the interest rates to zero. At one point, we saw the swap rate on a 30-year note go to 72 basis points. Okay. That means that the value of a long-dated bond immediately inflates.

(00:16:09)
So the bond market had hyperinflation within minutes of these financial decisions. The asset market had hyperinflation. We had what you call a K-shaped recovery, what we affectionately call a K-shaped recovery. Main Street shut down, Wall Street recovered, all within six weeks. The inflation was in the assets, in the stocks, in the bonds. If you look today, you see that the typical house, according to the Case-Shiller index, is up 19.2% year over year. So if you’re a first-time home buyer, the inflation rate is 19%. The formal CPI announced is 7.9%. You can pretty much create any inflation rate you want by constructing a market basket, a weighted basket of products or services or assets, that yields you the answer. I think that the fundamental failing of economists is, first of all, they don’t really have a term for asset inflation.
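The claim that a basket can be engineered to show any desired rate is easy to demonstrate with the two numbers quoted here (19.2% housing per Case-Shiller, 7.9% headline CPI): for any target in between, there is a housing weight that produces it.

```python
# With two components at rates r1 and r2, the weight w solving
#   w*r1 + (1 - w)*r2 = target   is   w = (target - r2) / (r1 - r2).
housing_rate = 0.192  # Case-Shiller figure quoted in the conversation
cpi_rate = 0.079      # headline CPI figure quoted in the conversation

def housing_weight_for(target):
    """Housing weight that makes the basket report the target rate."""
    return (target - cpi_rate) / (housing_rate - cpi_rate)

for target in (0.10, 0.15, 0.19):
    w = housing_weight_for(target)
    print(f"target {target:.0%}: weight housing at {w:.0%}")
```

Whoever picks the weights picks the answer, which is the fundamental objection to treating any single basket as “the” inflation rate.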
Lex Fridman
(00:17:24)
What’s an asset? What’s asset hyperinflation? You mentioned the bond market swap rate, and assets as where the majority of the hyperinflation happened. What’s inflation? What’s hyperinflation? What’s an asset? What’s an asset market? I’m going to ask so many dumb questions.
Michael Saylor
(00:17:40)
In the conventional economic world, you would treat inflation as the rate of increase in price of a market basket of consumer products, defined by a government agency.
Lex Fridman
(00:17:56)
So they have traditional things that a regular consumer would be buying. The government selects, like, toilet paper, food, a toaster, a refrigerator, electronics, all that kind of stuff. And it’s like a representative basket of goods that leads to a content existence on this earth for a regular consumer.
Michael Saylor
(00:18:19)
They define a synthetic metric. I mean, I’m going to say you should have a thousand square foot apartment and you should have a used car, and you should eat three hamburgers a week. Now, 10 years go by and the apartment costs more. I could adjust the market basket via what they call hedonic adjustments. I could decide that in 1970 you needed a thousand square feet, but in the year 2020, you only need 700 square feet because we’ve miniaturized televisions and we’ve got more efficient electric appliances. And because things have collapsed into the iPhone, you just don’t need as much space. So now it may be that the apartment costs 50% more, but after the hedonic adjustment, there is no inflation because I just downgraded the expectation of what a normal person should have.
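The hedonic adjustment described here can be put in numbers, using the figures from the example itself: a 50% price rise, and a basket redefinition from 1,000 down to 700 square feet.

```python
# Hedonic-adjustment arithmetic from the apartment example.
price_per_sqft_old = 1.0   # normalized starting price
price_per_sqft_new = 1.5   # the apartment costs 50% more

basket_sqft_old = 1000     # what the basket used to assume you need
basket_sqft_new = 700      # the redefined "need" after the adjustment

cost_old = price_per_sqft_old * basket_sqft_old
cost_new = price_per_sqft_new * basket_sqft_new
measured = cost_new / cost_old - 1
print(f"measured inflation: {measured:.0%}")  # 5%, though prices rose 50%
```

With these particular numbers the reported rate is 5% rather than zero, but the mechanism is the same: shrinking the basket absorbs most of the price increase.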
Lex Fridman
(00:19:11)
So the synthetic nature of the metric allows for manipulation by people in power.
Michael Saylor
(00:19:17)
Pretty much. I guess my criticism of economists is, rather than embracing inflation based upon its fundamental idea, which is the rate at which the price of things goes up, they’ve been captured by mainstream conventional thinking to immediately equate inflation to the government-issued CPI or government-issued PCE or government-issued PPI measure, which was never the rate at which things go up. It’s simply the rate at which a synthetic basket of products and services the government wishes to track goes up. Now, the problem with that is two big things. One thing is the government gets to create the market basket, and so they keep changing what’s in the basket over time.

(00:20:13)
So I mean, if I said three years ago, you should go see 10 concerts a year, and the concert tickets now cost $200 each. Now it’s $2,000 a year to go see concerts. Now I’m in charge of calculating inflation, so I redefine your entertainment quota for the year to be eight Netflix streaming concerts, and now they don’t cost $2,000. They cost nothing, and there is no inflation, but you don’t get your concerts, right? So the problem starts with continually changing the definition of the market basket, but in my opinion, that’s not the biggest problem. The more egregious problem is the fundamental idea that assets aren’t products or services, that assets can’t be inflated.
Lex Fridman
(00:21:02)
What’s an asset?
Michael Saylor
(00:21:03)
A house, a share of Apple stock, a bond, a Bitcoin is an asset or a Picasso painting.
Lex Fridman
(00:21:17)
Not a consumable good, not an apple that you can eat.
Michael Saylor
(00:21:23)
Right. If I throw away an asset, then I’m not on the hook to track the inflation rate for it. So what happens if I change the policy? Let’s take the classic example. A million dollar bond at a 5% interest rate gives you $50,000 a year in risk-free income. You might retire on $50,000 a year in a low cost jurisdiction. So the cost of social security or early retirement is $1 million when the interest rate is 5%. During the crisis of March of 2020, the interest rate on a 10-year bond went to 50 basis points. So now the cost of that bond is $10 million. The cost of social security went from a million dollars to $10 million. So if you wanted to work your entire life, save money and then retire risk-free and live happily ever after on a $50,000 salary, living on a beach in Mexico, wherever you wanted to go, you had hyperinflation; the cost of your aspiration increased by a factor of 10 over the course of some amount of time.
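The retirement arithmetic in this example reduces to principal = income / rate, which makes the 10x jump mechanical:

```python
# Principal required to generate a given risk-free annual income.
def principal_needed(annual_income, rate):
    return annual_income / rate

income = 50_000
at_5_percent = principal_needed(income, 0.05)    # ~$1 million at 5%
at_50_bps = principal_needed(income, 0.005)      # ~$10 million at 0.5%
print(at_5_percent, at_50_bps, at_50_bps / at_5_percent)
```

Halving the rate doubles the required principal; cutting it by 10x multiplies it by 10, which is the “hyperinflation of the aspiration” being described.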

(00:22:30)
In fact, in that case, that was over the course of about 12 years. As the inflation rate ground down, the asset traded up. But the conventional view is, “Oh, that’s not a problem because it’s good that the bond is highly priced because we own the bond.” Or what’s the problem with the inflation rate in housing being 19%? It’s an awful problem for a 22-year-old that’s starting their first job, that’s saving money to buy a house. But it would be characterized as a benefit to society by a conventional economist who would say, “Well, housing asset values are higher because of interest rate fluctuation, and now the economy’s got more wealth.” And so that’s viewed as a benefit.
Lex Fridman
(00:23:20)
So what’s being missed here? The suffering of the average person, or the struggle, the suffering, the pain of the average person, like metrics that capture that within the economic system. When you’re talking about-
Michael Saylor
(00:23:38)
One way to say it is, a conventional view of inflation as CPI understates the human misery that’s inflicted upon the working class and on mainstream companies by the political class. And so it’s a massive shift of wealth from the working class to the property class. It’s a massive shift of power from the free market to the centrally governed or the controlled market. It’s a massive shift of power from the people to the government. And maybe one more illustrative point here, Lex, is: what do you think the inflation rate’s been for the past 100 years?
Lex Fridman
(00:24:25)
Oh, we talking about the scalar again?
Michael Saylor
(00:24:28)
If you took a survey of everybody on the street and you asked them what do they think inflation was, what is it? You remember when Jerome Powell said, our target’s 2%, but we’re not there. If you go around the corner, I have posted the deed to this house sold in 1930, okay. And the number on that deed is $100,000, 1930. And if you go on Zillow and you get the Z estimate-
Lex Fridman
(00:24:58)
Is it higher than that? No?
Michael Saylor
(00:25:00)
$30,500,000. So that’s 92 years, 1930 to 2022, and in 92 years, we’ve had a 305x increase in the price of the house. Now if you actually do the calculation, you come to the conclusion that the inflation rate was approximately 6.5% a year, every year, for 92 years. And there’s nobody in government, no conventional economist, who would ever admit to an inflation rate of 7% a year in the US dollar over the last century. Now, if you dig deeper, I mean, one guy that’s done a great job working on this is Saifedean Ammous, who wrote the book The Bitcoin Standard. And he notes that on average it looks like the inflation rate in the money supply is about 7% a year all the way up to the year 2020.
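The deed-to-Zillow arithmetic checks out: a 305x increase over 92 years implies a compound annual rate of 305 to the power 1/92, minus one.

```python
# A 305x price increase over 92 years (1930 to 2022) implies a
# compound annual growth rate of 305**(1/92) - 1.
growth = 305 ** (1 / 92) - 1
print(f"{growth:.2%} per year")  # roughly 6.4%, close to the ~6.5% cited
```

The same one-liner works for any “X-fold over N years” claim, which makes these long-horizon figures easy to sanity-check.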

(00:26:03)
If you look at the S&P index, which is a market basket of scarce, desirable stocks, it returned about 10% a year for a 100 years, while the money supply was expanding at 7% a year for a 100 years. If you actually talk to economists or you look at the economy and you ask the question, “How fast does the economy grow in its entirety year over year?” Generally about 2 to 3%; the sum total impact of all this technology and human ingenuity might get you a two and a half, 3% improvement a year.
Lex Fridman
(00:26:39)
As measured by GDP. Are you okay with that question?
Michael Saylor
(00:26:44)
I’m not sure I’d go that far yet, but I would just say that if you had the human race doing stuff, and if you ask the question, “How much more efficiently will we do the stuff next year than this year?” Or, “What’s the value of all of our innovations and inventions and investments in the past 12 months?” You’d be hard-pressed to say we get 2% better. The typical investor thinks they’re 10% better every year. So if you look at what’s going on really, when you’re holding a million dollars of stocks and you’re getting a 10% gain a year, you’re really getting a 7% expansion of the money supply. You’re getting a two or 3% gain under the best circumstances. And another way to say that is, if the money supply stopped expanding at 7% a year, the S&P yield might be 3% and not 10%. It probably should be.

(00:27:42)
Now, that gets you to start to ask a bunch of other fundamental questions. Like, if I borrow a billion dollars and pay 3% interest, and the money supply expands at seven to 10% a year, and I end up making a 10% return on a billion dollar investment, paying 3% interest, is that fair? And who suffered so that I could do that? Because in an environment where you’re just inflating the money supply and you’re holding the assets constant, it stands to reason that the price of all the assets is going to appreciate somewhat proportional to the money supply, and the difference in asset appreciation is going to be a function of the scarce, desirable quality of the assets, and to what extent can I make more of them, and to what extent are they truly limited in supply?
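The return decomposition above (10% nominal return, 7% money-supply expansion, “two or 3%” real) is a compounding relationship rather than a plain subtraction, though the approximation is close:

```python
# Real gain implied by a 10% nominal return under 7% money-supply
# expansion: (1 + nominal) / (1 + expansion) - 1, which the
# conversation approximates as 10% - 7% = 3%.
nominal = 0.10
monetary_expansion = 0.07
real_gain = (1 + nominal) / (1 + monetary_expansion) - 1
print(f"{real_gain:.2%}")  # ~2.8%, i.e. the "two or 3% gain"
```

At these magnitudes the subtraction shortcut is off by only a fraction of a percent, but at hyperinflation rates the ratio form is the one that stays correct.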
Lex Fridman
(00:28:37)
Yeah. So we will get to a lot of the words you said there, the scarcity, and how it’s connected to how limited the assets are and the value of those assets. But you also said, so the expansion of the money supply, which is, put another way, printing money. And so is that always bad? The expansion of the money supply, just to put some terms on the table so we understand them. You nonchalantly say it’s always on average expanding every year. The money supply is expanding every year by 7%. That’s a bad thing? That’s a universally bad thing?
Michael Saylor
(00:29:17)
It’s awful. I guess to be precise, it’s the currency. I would say money is monetary energy or economic energy, and the economic energy has to find its way into a medium. So if you want to move it rapidly as a medium of exchange, it has to find its way into currency, but the money can also flow into property like a house or gold. If the money flows into property, it’ll probably hold its value much better. If the money flows into currency… If you had put $100,000 into this house, you would have a 305x return over 92 years. But if you had put the $100,000 into a safe deposit box and buried it in the basement, you would’ve lost 99.7% of your wealth over the same time period. So the expansion of the currency creates a massive inefficiency in the society, what I’ll call an adiabatic lapse. What we’re doing is we’re bleeding the civilization to death.
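The safe-deposit-box figure follows from compounding the ~6.5% annual debasement rate of the house example over the same 92 years:

```python
# Cash losing ~6.5% of its purchasing power per year for 92 years:
remaining = (1 / 1.065) ** 92
loss = 1 - remaining
print(f"lost {loss:.1%} of purchasing power")  # ~99.7%, matching the figure
```

The two numbers in the example are the same phenomenon seen from opposite sides: the house “gains” 305x in currency terms while the currency loses 99.7% against the house.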
Lex Fridman
(00:30:30)
What’s the adiabatic… What’s that word?
Michael Saylor
(00:30:31)
Adiabatic lapse.
Lex Fridman
(00:30:32)
Adiabatic.
Michael Saylor
(00:30:34)
In aerospace engineering, you want to solve any problem. They start with the phrase assume an adiabatic system. And what that means is a closed system.
Lex Fridman
(00:30:44)
Okay.
Michael Saylor
(00:30:44)
So-
Lex Fridman
(00:30:45)
I’ve got it.
Michael Saylor
(00:30:45)
… I’ve got a container. And in that container, no air leaves and no air enters. No energy exits or enters. So it’s a closed system.
Lex Fridman
(00:30:54)
So you got the closed system lapse.
Michael Saylor
(00:30:57)
Okay, I’m going to use a-
Lex Fridman
(00:31:00)
There’s a leak in the ship.
Michael Saylor
(00:31:03)
… physical metaphor for you, because you’re into jujitsu. You got 10 pints of blood in your body, and so before your next workout, I’m going to take one pint from you. Now you’re going to go exercise, but you’ve lost 10% of your blood. You’re not going to perform as well. It takes about one month for your body to replace the red blood cells. So what if I tell you every month you got to show up and I’m going to bleed you? Okay, so if I’m draining the energy, I’m draining the blood from your body, you can’t perform. Adiabatic lapse is when you go up in altitude. Every thousand feet, you lose three degrees.

(00:31:45)
You go to 50,000 feet, you’re 150 degrees colder than at sea level. That’s why you look at your instruments and instead of 80 degrees, it’s minus 70 degrees. Why is the temperature falling? The temperature’s falling because it’s not a closed system, it’s an open system. As the air expands, the density falls, the energy per cubic whatever falls, and therefore the temperature falls. The heat’s falling out of the solution. So when you’re inflating, let’s say you’re inflating the money, the currency supply, by 6%, you’re sucking 6% of the energy out of the fluid that the economy is using to function.
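The lapse-rate numbers in the metaphor are internally consistent: 3 degrees lost per thousand feet, over 50,000 feet, takes 80 degrees at sea level down to minus 70.

```python
# Lapse-rate arithmetic from the metaphor: ~3 degrees lost per
# thousand feet of altitude.
degrees_per_thousand_ft = 3
altitude_ft = 50_000
temperature_drop = degrees_per_thousand_ft * altitude_ft // 1000  # 150
sea_level_temp = 80
temp_at_altitude = sea_level_temp - temperature_drop              # -70
print(temperature_drop, temp_at_altitude)
```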
Lex Fridman
(00:32:34)
So the currency, this ocean of currency, is the medium the economy uses to function. It becomes inefficient when you expand the money supply, but it’s the liquid, I’m trying to find the right adjective here, it’s how you do transactions at the scale of billions.
Michael Saylor
(00:32:54)
Currency is the asset we use to move monetary energy around, and you could use the dollar or you could use the peso or you could use the bolívar.
Lex Fridman
(00:33:04)
Selling and buying houses is much more inefficient; you can’t transact between billions of people with houses.
Michael Saylor
(00:33:14)
Yeah. Properties don’t make such good mediums of exchange. They make better stores of value and they have utility value if it’s a ship or a house or a plane or a bushel of corn.

Government

Lex Fridman
(00:33:29)
Can we zoom out, keep zooming out until we reach the origin of human civilization, but on the way ask: you gave economists a D minus. I’m not even going to ask what you’d give to governments. Do you think their failure, economists’ and governments’ failure, is malevolence or incompetence?
Michael Saylor
(00:33:53)
I think policy makers are well-intentioned, but generally all government policy is inflammatory and inflationary.

(00:34:00)
So what I mean by that is when you have a policy pursuing supply chain independence, if you have an energy policy, if you have a labor policy, if you have a trade policy, if you have any kind of foreign policy, a domestic policy, a manufacturing policy, every one of these, a medical policy, every one of these policies interferes with the free market and generally prevents some rational actor from doing it in a cheaper, more efficient way. So when you layer them on top of each other, they all have to be paid for. If you want to shut down the entire economy for a year, you have to pay for it, right? If you want to fight a war, you have to pay for it. If you don’t want to use oil or natural gas, you have to pay for it. If you don’t want to manufacture semiconductors in China and you want to manufacture them in the U.S., you’ve got to pay for it.

(00:35:03)
If I rebuild the entire supply chain in Pennsylvania and I hire a bunch of employees and then I unionize the employees, then not only am I… I idle the factory in the Far East, it goes to 50% capacity. So whatever it sells, it has to raise the price on, and then I drive up the cost of labor for every other manufacturer in the U.S. because I’m competing against them, right? I’m changing that condition. So everything gets less efficient, everything gets more expensive, and of course, the government couldn’t really pay for its policies and its wars with taxes. We didn’t pay for World War I with tax. We didn’t pay for World War II with tax. We didn’t pay for Vietnam with tax. In fact, when you trace this, what you realize is the government never pays for all of its policies with taxes. It pays for-
Lex Fridman
(00:35:54)
Because it’s super painful to raise taxes, to truly, transparently pay for the things you’re doing with taxpayer money, because people feel the pain.
Michael Saylor
(00:36:05)
That’s one interpretation or it’s just too transparent. If people understood the true cost-
Lex Fridman
(00:36:12)
Of war, they wouldn’t want to go to war.
Michael Saylor
(00:36:15)
If you were told that you would lose 95% of your assets, that 90% of everything you will ever have will be taken from you, you might re-prioritize your thoughts about a given policy and you might not vote for that politician.
Lex Fridman
(00:36:31)
But you’re still saying incompetence not malevolence. So fundamentally, government creates a bureaucracy of incompetence is how you look at it.
Michael Saylor
(00:36:42)
I think a lack of humility. If people had more humility, then they would realize-
Lex Fridman
(00:36:51)
Humility about how little they know, how little they understand about the function of complex systems.
Michael Saylor
(00:36:58)
It’s a phrase from Clint Eastwood’s movie Unforgiven where he says, “A man’s got to know his limitations.” I think that a lot of people overestimate what they can accomplish and experience in life causes you to reevaluate that. So I’ve done a lot of things in my life and generally, my mistakes were always my good ideas that I enthusiastically pursued to the detriment of my great ideas that required 150% of my attention to prosper. So I think people pursue too many good ideas, and they all sound good, but there’s just a limit to what you can accomplish. And everybody underestimates the challenges of implementing an idea, and they always overestimate the benefits of the pursuit of that.

(00:37:58)
And so I think it’s an overconfidence that causes an over-exuberance in the pursuit of policies. As the ambition of the government expands, so must the currency supply. I could say the money supply, but let’s say the currency supply. You can triple the number of pesos in the economy, but it doesn’t triple the amount of manufacturing capacity in said economy, and it doesn’t triple the amount of assets in the economy. It just triples the pesos. So as you increase the currency supply, the price of all those scarce, desirable things will tend to go up rapidly. And the confidence of all of the institutions, the corporations and the individual actors and trading partners will collapse.
Lex Fridman
(00:38:53)
If we take a tangent on a tangent, and we will return soon to the big human civilization question: so if government naturally wants to buy stuff it can’t afford, what’s the best form of government? Anarchism? Libertarianism? So there’s not even armies, there’s no borders, that’s anarchism-
Michael Saylor
(00:38:53)
The least.
Lex Fridman
(00:39:23)
The smallest possible, the less the-
Michael Saylor
(00:39:27)
The best government would be the least, and the debate will be over that.
Lex Fridman
(00:39:32)
When you think about this stuff, do you think about, “Okay, government is the way it is, I, as a person that can generate great ideas, how do I operate in this world?” Or do you also think about the big picture? If we start a new civilization somewhere on Mars, do you think about what’s the ultimate form of government? What’s at least a promising thing to try?
Michael Saylor
(00:40:02)
I have laser eyes on my profile on-
Lex Fridman
(00:40:05)
Yes-
Michael Saylor
(00:40:05)
… Twitter, Lex.
Lex Fridman
(00:40:06)
… we’ve noticed. What does that mean?
Michael Saylor
(00:40:07)
And the significance of laser eyes is to focus on the thing that can make a difference.
Lex Fridman
(00:40:13)
Yes.
Michael Saylor
(00:40:14)
And if I look at the civilization, I would say half the problems in the civilization are due to the fact that our understanding of economics and money is defective. Half, 50%, I don’t know, it’s worth $500 trillion worth of problems? Money represents all the economic energy in the civilization, and it equates to all the products, all the services and all the assets that we have and whatever we’re going to have. So that’s half. The other half of the problems in the civilization are medical and military and political and philosophical and natural. And I think that there are a lot of different solutions to all those problems, and they are all honorable professions and they all merit a lifetime of consideration for the specialists in all those areas. I think that what I could offer that’s constructive is: inflation is completely misunderstood. It’s a much bigger problem than we understand it to be.

(00:41:37)
We need to introduce engineering and science techniques into economics if we want to further the human condition. All government policy is inflationary. And another pernicious myth is “inflation is always and everywhere a monetary phenomenon,” a famous quote by Milton Friedman, I believe. That is, the idea that inflation comes from expanding the currency supply. It’s a nice phrase and it’s oftentimes quoted by people that are anti-inflation. But again, it just signifies a lack of appreciation of what the issue is. If I had a currency which was completely non-inflationary, if I never printed another dollar and if I eliminated fractional reserve banking from the face of the earth, we’d still have inflation, and we’d have inflation as long as we have government that is capable of pursuing any kind of policies that are in themselves inflationary, and generally, they all are.
Lex Fridman
(00:42:43)
So in general, the inflationary tendency is a big characteristic of human nature: government, a collection of groups that have power over others and allocate other people’s resources, will try, intentionally or not, to hide the costs of those allocations in some tricky ways, whatever options are available.
Michael Saylor
(00:43:08)
Hiding the cost is like the tertiary thing. The primary goal is the government will attempt to do good, right? And-
Lex Fridman
(00:43:18)
That’s the primary problem?
Michael Saylor
(00:43:21)
They will attempt to do good and they will do good imperfectly, and they will create oftentimes as much damage… more damage than the good they do. Most government policy will be iatrogenic. It will create more harm than good in the pursuit of it, but it is what it is. The secondary issue is they will unintentionally pay for it by expanding the currency supply without realizing that they’re actually paying for it in a suboptimal fashion. They’ll collapse their own currencies while they attempt to do good. The tertiary issue is they will mismeasure how badly they’re collapsing the currency. So for example, if you go to the Bureau of Labor Statistics and look at the numbers printed by the Fed, they’ll say, “Oh, it looks like the dollar’s lost 95% of its purchasing power over 100 years.” They sort of fess up that there’s a problem, but they make it 95% loss over 100 years. What they don’t do is realize it’s a 99.7% loss over 80 years.
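The gap between the two debasement figures quoted here, “95% lost over 100 years” versus “99.7% lost over 80 years,” is larger than it looks once converted into compound annual rates:

```python
# Converting "X% lost over N years" into a compound annual loss rate:
#   rate = 1 - (1 - X)**(1/N)
def annual_loss_rate(fraction_lost, years):
    return 1 - (1 - fraction_lost) ** (1 / years)

official = annual_loss_rate(0.95, 100)       # ~3% per year
steeper_figure = annual_loss_rate(0.997, 80)  # ~7% per year
print(f"{official:.1%} vs {steeper_figure:.1%}")
```

A seemingly small difference in the headline percentage corresponds to more than double the annual debasement rate, which is the mismeasurement being alleged.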

(00:44:34)
So they will mismeasure just the horrific extent of the monetary policy in pursuit of the foreign policy and the domestic policy, which they overestimate their budget and their means to accomplish their ends, and they underestimate the cost. And they’re oblivious to the horrific damage that they do to the civilization because the mental models that they use that are conventionally taught are wrong. The mental model that it’s okay, we can print all this money because the velocity of the money is low because money velocity is a scalar and inflation is the scalar, and we don’t see 2% inflation yet, and the money velocity is low, and so it’s okay if we print trillions of dollars. Well, the money velocity was immediate. The velocity of money through the crypto economy is 10,000 times faster than the velocity of money through the consumer economy. I think Nic pointed out when you spoke to him, he said it takes two months for a credit card transaction to settle, right? So you spend a million dollars in the consumer economy, you can move it six times a year.

(00:45:59)
You put the million dollars into gold, gold will sit in a vault for a decade. Okay? So the velocity of money through gold is 0.1. You put the money in the stock market and you can trade it once a week. The settlement is T+2. Maybe you get to 2:1 leverage; you might get to a money velocity of 100 a year in the stock market. You put your money into the crypto economy and these people are settling every four hours. And if you're offshore, they're trading with 20x leverage. So if you settle every day and you trade with 20x leverage, you just went to 7,000. So the velocity of the money varies. I think the politicians, they don't really understand inflation and they don't understand economics, but you can't blame them, because the economists don't understand economics. Because if they did, they would be creating multivariate computer simulations where they actually put in the price of every piece of housing in every city in the world, the full array of foods and the full array of products and the full array of assets.
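The velocity numbers quoted here are back-of-envelope: settlements per year times leverage. A hypothetical sketch of that arithmetic (illustrative, not a formal monetary metric):

```python
# Rough "velocity" as settlements per year times leverage,
# following the examples in the conversation.
def velocity(settlements_per_year, leverage=1):
    return settlements_per_year * leverage

print(velocity(0.1))       # gold sitting in a vault ~a decade: 0.1
print(velocity(6))         # consumer credit card, ~2-month settlement: 6
print(velocity(52, 2))     # weekly stock trades at 2:1 leverage: 104
print(velocity(365, 20))   # daily crypto settlement at 20x leverage: 7300
```

Daily settlement at 20x leverage lands near the "7,000" figure in the conversation; weekly trading at 2:1 lands near the "100 a year" figure.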

(00:47:12)
And then on a monthly basis they would publish all those results. And that's a high bandwidth requirement, and I think that people don't really want to embrace it. There's that phrase: you can't tell people what to think, but you can tell them what to think about. The most pernicious thing is I get you to misunderstand the phenomenon, so that even when it's happening to you, you don't appreciate that it's a bad thing and you think it's a good thing. So if housing prices are going up 20% year over year, and I say this is great for the American public 'cause most of them are homeowners, then I have misrepresented a phenomenon. Inflation is 20%, not 7%, and I have misrepresented it as being a positive rather than a negative, and people will stare at it. And you could even show them their house on fire and they would perceive it as being great because it's warming them up and they're going to save on their heat cost.
Lex Fridman
(00:48:22)
It does seem that the cruder the model, whether it's economics or psychology, the easier it is to weave whatever the heck narrative you want. And not in a malicious way; it's just some kind of emergent phenomenon, this narrative thing that we tell ourselves. So you can tell any kind of story about inflation: inflation is good, inflation is bad. The cruder the model, the easier it is to tell a narrative about it. So if you take an engineering approach, I feel like it becomes more and more difficult to run away from a true deep understanding of the dynamics of the system.
Michael Saylor
(00:49:06)
Honestly, if you went to 100 people on the street and you asked them to define inflation, how many would say it's a vector tracking the change in price of every product, service, and asset in the world over time?
Lex Fridman
(00:49:22)
No.
Michael Saylor
(00:49:22)
Not many.
Lex Fridman
(00:49:23)
Not many.
Michael Saylor
(00:49:25)
If you went to them and you said, “Do you think 2% inflation a year is good or bad?”, the majority would probably say, “Well, I hear it’s good.” The majority of economists would say 2% inflation a year is good. And of course, look at the ship next to us. What if I told you that the ship leaked 2% of its volume every year? The ship is rotting 2% a year. That means the useful life of the ship is 50 years. Now, ironically, that’s true. A wooden ship had a 50-year to 100-year life. 100 would be long; 50 years, not unlikely. So when we built ships out of wood, they had a useful life of about 50 years, and then they sunk and they rotted. There’s nothing good about it. You build a ship out of steel and it’s 0% as opposed to 2% degradation, and how much better is 0% versus 2%?

(00:50:25)
Well, 2% means you have a half-life of 35 years. That’s basically the half-life of money in gold. If I store your life force in gold, under perfect circumstances, you have a useful life of 35 years. 0% is a useful life of forever. So 0% is immortal; 2% is 35 years average life expectancy. So the idea that you would think the life expectancy of the currency and the civilization should be 35 years instead of forever is a silly notion. But the tragic notion is it was 7%, and 7 into 70 is 10 years.
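The half-life figures follow from the rule of 70, the continuous-compounding approximation: half-life is ln 2, roughly 0.7, divided by the annual decay rate. A quick sketch:

```python
import math

def half_life_years(annual_decay_rate):
    """Years until purchasing power halves at a constant annual decay rate."""
    return math.log(2) / annual_decay_rate

print(f"{half_life_years(0.02):.0f}")   # 35 years at 2%
print(f"{half_life_years(0.07):.0f}")   # 10 years at 7%
print(f"{half_life_years(0.20):.1f}")   # 3.5 years at 20%
```

The 20% case matches the "three to four years" half-life mentioned below for weak currencies.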

(00:51:12)
The money has had a half-life of 10 years, except for the fact that in weak societies, in Argentina or the like, the half-life of the money is three to four years; in Venezuela, one year. So the United States dollar and the United States economic system was the most successful economic system in the world in the last 100 years. We won every war. We were the world’s superpower. Our currency lost 99.7% of its value, and that means, horrifically, every other currency lost more, right? In essence, the other ones were at 99.9%, except for most that were 100% because they all completely failed. And you’ve got a mainstream economic community that thinks that inflation is a number and 2% is desirable. It’s like, remember George Washington? You know how he died?
Lex Fridman
(00:52:16)
No.
Michael Saylor
(00:52:18)
Well-meaning physicians bled him to death. Okay? The last thing in the world you would want to do to a sick person is bleed them in the modern world. I think we understand that oxygen is carried by the blood cells, and if… There’s that phrase, triage phrase, what’s the first thing you do in an injury? Stop the bleeding. Single first thing, right? You show up after any accident, I look at you, stop the bleeding because you’re going to be dead in a matter of minutes if you bleed out. So it strikes me as being ironic that orthodox conventional wisdom was bleed the patient to death. And this was the most important patient in the country, maybe in the history of the country, and we bled him to death trying to help him.

(00:53:14)
So when you’re actually inflating the money supply at 7% but you’re calling it 2% because you want to help the economy, you’re literally bleeding the free market to death. But the sad fact is George Washington went along with it ’cause he thought that they were going to do him good. And the majority of the society, most companies, most conventional thinkers, the working class, they go along with this because they think that someone has their best interest in mind. And the people that are bleeding them to death, they believe that prescription because their mental models are just so defective, and their understanding of the energy and engineering and economics that are at play is crippled by these mental models.
Lex Fridman
(00:54:11)
But that’s both the bug and the feature of human civilization, that ideas take hold, that unite us. We believe in them, and we make a lot of cool stuff happen just by the fact that a lot of people believe the same thing. They get together and they get some shit done because they believe that thing. And then some ideas can be really bad and really destructive. But on average, the ideas seem to be progressing in a direction of good. Let me just step back. What the hell are we doing here, us humans on this earth? How do you think of humans? How special are humans? How did human civilization originate on this earth, and what is this human project we’re all taking on? You mentioned fire and water, and apparently bleeding you to death is not a good idea. I always thought you could get the demons out that way, but that was a recent realization. So what’s this thing we’re doing here?

War and power

Michael Saylor
(00:55:20)
I think what distinguishes human beings from all the other creatures on the earth is our ability to engineer. We’re engineers, right?
Lex Fridman
(00:55:31)
To solve problems or just build incredible cool things?
Michael Saylor
(00:55:38)
Engineering, harnessing energy and technique to make the world a better place than you found it. From the point that we actually started to play with fire, that was a big leap forward. Harnessing the power of kinetic energy and missiles, another step forward, every city built on water. Why water? Well, water’s bringing energy, right? If you actually put a turbine on a river or you capture a change in elevation of water, you’ve literally harnessed gravitational energy, but water’s also bringing you food. It’s also giving you a cheap form of getting rid of your waste. It’s also giving you free transportation. You want to move one ton blocks around, you want to move them in water. So I think the human story is really the story of engineering a better world. And the rise in the human condition is determined by those groups of people, those civilizations that were best at harnessing energy, right?

(00:56:55)
If you look at the Greek civilization, they built it around ports and seaports and water and created a trading network. The Romans were really good at harnessing all sorts of engineering. The aqueducts are a great example. If you go to any big city, you travel through cities in the Med, you find that the carrying capacity of the city or the island is 5,000 people without running water. And then if you can find a way to bring water to it, it increases by a factor of 10. And so human flourishing is really only possible through that channeling of energy that eventually takes the form of air power. That ship, look at the intricacy of those sails. Just the model is intricate. Now, think about all of the experimentation that took place to figure out how many sails to put on that ship and how to rig them and how to repair them and how to operate them.
Lex Fridman
(00:57:59)
It’s thousands of lives spent thinking through all the tiny little details, all to increase the effectiveness, the efficiency of this ship as it sails through water. And we should also note there’s a bunch of cannons on the side. So obviously-
Michael Saylor
(00:58:18)
Another form of engineering, energy harnessing with explosives.
Lex Fridman
(00:58:23)
To achieve what end? That’s another discussion. Exactly.
Michael Saylor
(00:58:27)
Suppose we’re trying to get off the planet, right?
Lex Fridman
(00:58:30)
Well, there’s a selection mechanism going on, so natural selection, or however evolution works. It seems that one of the interesting inventions on earth was the predator/prey dynamic, that you want to be the bigger fish, that violence seems to serve a useful purpose if you look at earth as a whole. We as humans now like to think of violence as really a bad thing. One of the amazing things about humans is we ultimately tend towards cooperation. We like peace. If you just look at history, we want things to be nice and calm. But wars just break out every once in a while and lead to immense suffering and destruction and so on, and they have a resetting-the-palette effect. It’s one that’s full of just immeasurable human suffering, but it’s like a way to start over.
Michael Saylor
(00:59:34)
We’re clearly the apex predator on the planet. And I Googled something the other day: “What’s the most common form of mammal life on the earth?”
Lex Fridman
(00:59:47)
By number of organisms?
Michael Saylor
(00:59:48)
Count.
Lex Fridman
(00:59:49)
By count?
Michael Saylor
(00:59:49)
And the answer that came back was human beings.
Lex Fridman
(00:59:52)
Really?
Michael Saylor
(00:59:52)
I was shocked. I couldn’t believe it, right? It says apparently if we’re just looking at mammals, the answer was human beings are the most common, which was very interesting to me. I almost didn’t believe it, but I was trying to figure out, 8 billion or so human beings-
Lex Fridman
(01:00:06)
Yeah, it’s a lot.
Michael Saylor
(01:00:07)
… there’s no other mammal that’s got more than 8 billion. If you walk through downtown Edinburgh in Scotland and you look up at this hill and this castle up on the hill and you talk to people, the story is, “Oh, yeah, well, that was a British castle. Before, it was a Scottish castle. Before, it was a Pict castle. Before, it was a Roman castle. Before, it was some other Celtic castle. Before, it was…” Then they found 13 prehistoric castles buried one under the other, under the other. And you get to the conclusion that 100,000 years ago, somebody showed up and grabbed the high point, the apex of the city, and they built a stronghold there, and they flourished and their family flourished and their tribe flourished until someone came along and knocked them off the hill. And it’s been a nonstop, never-ending fight by the most aggressive, most powerful entity, family, organization, municipality, tribe, whatever-
Lex Fridman
(01:01:08)
All for the hill.
Michael Saylor
(01:01:09)
For that one hill, going back since time immemorial. And you scratch your head and you think, it seems like it’s just this never-ending-
Lex Fridman
(01:01:24)
But doesn’t that lead-
Michael Saylor
(01:01:25)
… wheel.
Lex Fridman
(01:01:25)
… if you just look at all kinds of metrics, it seems to improve the quality of our cannons and ships as a result. It seems that war, just like your laser eyes, focuses the mind on the engineering tasks.
Michael Saylor
(01:01:39)
It is that, and it does remind you that the winner is always the most powerful. And we throw that phrase out, but no one thinks about what that phrase means: who is the most powerful, the most powerful side won. They don’t think about power as energy delivered in a period of time. And then you think a guy with a spear is more powerful than someone with their fist, and someone with a bow and arrow is more powerful than the person with the spear. And then you realize that somebody with bronze is more powerful than without, and steel is more powerful than bronze.

(01:02:21)
And if you look at the Romans, they prevailed with artillery, and they could stand off from 800 meters and blast you to smithereens. You study the history of the Balearic slingers and you think we invented bullets, but they invented bullets to put in slings thousands of years ago that could stand off 500 meters and put a hole in your head. And so there was never a time when humanity wasn’t vying to come up with an asymmetric form of projecting their own power via technology.
Lex Fridman
(01:03:02)
And absolute power is when a leader is able to control a large number of humans, all facing the same direction, working in the same direction to leverage energy.
Michael Saylor
(01:03:17)
The most organized society wins. When the Romans were dominating everybody, they were the most organized civilization in Europe. As long as they stayed organized, they dominated. And at some point, they over-expanded and got disorganized and they collapsed. And I guess you could say the struggle of the human condition catalyzes the development of new technologies, one after the other. Anybody that rejects ocean power gets penalized. You reject artillery, you get penalized. You reject atomic power, you get penalized. If you reject digital power, cyber power, you get penalized. And the underlying control of the property keeps shifting hands from one institution or one government to another based upon how rationally they’re able to channel that energy and how well organized or coordinated they are.
Lex Fridman
(01:04:20)
Well, that’s a really interesting thing about both the human mind and governments and companies: once they get a few good ideas, they seem to stick with them. They reject new ideas. Whether that’s emergent or however that evolved, it seems to have a really interesting effect, ’cause when you’re young, you fight for the new ideas. You push them through, you find success, then you get complacent. You take over the world using that new idea, and then the new young person with a better new idea challenges you. As opposed to pivoting, you stick with the old and lose because of it, and that’s how empires collapse. And that happens both at the individual level, with two academics fighting about ideas or something like that, and at the level of human civilization, with governments. They hold on to the ideas of old. It’s fascinating.
Michael Saylor
(01:05:24)
An ever-persistent theme in the history of science is the paradigm shifts, and the paradigms shift when the old guard dies and a new generation arrives. Or the paradigm shifts when there’s a war, and everyone that disagrees with the idea of aviation finds bombs dropping on their head or everyone that disagrees with whatever your technology is has a rude awakening. And if they totally disagree, their society collapses and they’re replaced by that new thing.

Dematerializing information

Lex Fridman
(01:05:57)
A lot of the engineering you talked about had to do with ships and cannons and leveraging water. What about this whole digital thing that’s been happening over the past century? Is that still engineering in your mind? You’re starting to operate in these bits of information?
Michael Saylor
(01:06:19)
I think there’s two big ideas. The first wave of ideas was digital information, and that was the internet wave, which has been running since 1990 or so, for 30 years. And the second wave is digital energy. So if I look at digital information, this is the idea that we want to digitally transform a book: I’m going to dematerialize every book in this room into bits, and then I’m going to deliver a copy of the entire library to a billion people, and I’m going to do it for pretty much de minimis electricity. If I can dematerialize music, books, education, entertainment, maps, that is an incredibly exothermic transaction.

(01:07:14)
It’s a crystallization when we collapse into a lower energy state as a civilization and we give off massive amounts of energy. If you look at what Carnegie did, the richest man in the world created libraries everywhere at the time, and he gave away his entire fortune. And now we can give a better library to every six-year-old for nothing, and so what’s the value of giving a million books to 8 billion people? That’s the explosion in prosperity that comes from digital transformation. And when we do it with maps, I transform the map. I put it into a car. You get in the car and the car drives you where you want to go with the map. And how much better is that than a Rand McNally Atlas right here? It’s like a million times better.
Lex Fridman
(01:08:03)
Yeah.
Michael Saylor
(01:08:03)
So the first wave of digital transformation was the dematerialization of all of these informational things, which were non-conservative. That is, I could take Beethoven’s 5th Symphony played by the best orchestra in Germany, and I could give it to a billion people, and they could play it 1000 times each at less than the cost of the one performance, right? So I deliver culture and education and erudition and intelligence and insight to the entire civilization over digital rails. And the consequences for the human race are, first order, generally good, right? The world is a better place. It drives growth and you create these trillion dollar entities like Apple, and Amazon, and Facebook, and Google, and Microsoft, right? That is the first wave. The second wave,-
Lex Fridman
(01:08:58)
Do you mind? Sorry to interrupt. That first wave: you said the first order impact is generally positive. It feels like it’s positive in a way that nothing else in history has been positive, and we may not actually be able to truly understand the orders of magnitude of increase in productivity and progress in human civilization until we look back centuries from now. Or maybe just by looking at the impact of Wikipedia.
Michael Saylor
(01:09:37)
Right.
Lex Fridman
(01:09:40)
Giving access to basic wisdom or basic knowledge and then perhaps wisdom to billions of people. If you can just linger on that for a second, what’s your sense of the impact of that?
Michael Saylor
(01:09:56)
I would say, if you’re a technologist philosopher, the impact of a technology is so much greater on the civilization and the human condition than a non-technology that it’s almost not worth your trouble to bother trying to fix things a conventional way. So let’s take an example. I have a foundation, the Saylor Academy, and the Saylor Academy gives away free education, free college education, to anybody on earth that wants it. And we’ve had more than a million students. And if you go and you take the physics class, the lectures were by the same physics lecturer that taught me physics at MIT, except when I was at MIT, the cost of the first four weeks of MIT would’ve drained my family’s collective life savings from the last 100 years.
Lex Fridman
(01:10:52)
Yeah.
Michael Saylor
(01:10:53)
100 years worth of my father, my grandfather, my great-grandfather: if they saved every penny they had, after 100 years they could have paid for one week or two weeks of MIT. That’s how fiendishly expensive and inefficient it was. So I went on scholarship. I was lucky to have a scholarship, but on the other hand, I sat in the back of the 8.01 lecture hall and I was right up in the rafters. It’s an awful experience on these uncomfortable wooden benches, and you can barely see the blackboard, and you’ve got to be there synchronously. And the stuff we upload, you can start it and stop it and watch it on your iPad or watch it on your computer and rewind it multiple times and sit in a comfortable chair, and you can do it from anywhere on earth, and it’s absolutely free.

(01:11:42)
So I think about this and I think, you want to improve the human condition? You need people with postgraduate-level education. You need PhDs, and I know this sounds kind of elitist, but you want to cure cancer and you want to go to the stars? Fusion drive. We need new propulsion, right? We need extraordinary breakthroughs in every area of basic science, be it biology, or propulsion, or material science, or computer science. You’re not doing that with an undergraduate degree. You’re certainly not doing it with a high school education. But the cost of a PhD is like a million bucks. There’s like 10 million PhDs in the world, if you check it out. There’s 8 billion people in the world. How many people could get a PhD or would want to? Maybe not 8 billion, but a billion, 500 million. Let’s just say 500 million to a billion. How do you go from 10 million to a billion highly educated people, all of them specializing? And I don’t have to tell you how many different fields of human endeavor there are. I mean, your life is interviewing these experts and there’s so many, right? It’s amazing. So how do I give a multimillion-dollar education to a billion people? There’s two choices. You can either endow a scholarship, in which case you pay $75,000 a year; call it a million dollars a person. I can do it that way. But even if you had a trillion dollars, if you had $10 trillion to throw at the problem, and we’ve just thrown $10 trillion at certain problems, you don’t solve the problem. If I put $10 trillion on the table and I said, educate everybody, give them all a PhD, you still wouldn’t solve the problem. Harvard University can’t educate 18,000 people simultaneously, or 87,000, or 800,000, or 8 million. So you have to dematerialize the professor and dematerialize the experience. So you put it all as streaming, on-demand, computer-generated education, and you create simulations where you need to create simulations, and you upload it.
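The scholarship arithmetic in that argument can be laid out explicitly. This sketch uses the speaker's round numbers, which are illustrative, not verified data:

```python
# All figures are the speaker's round numbers, not verified data.
cost_per_phd = 1_000_000          # "a million dollars a person"
target_people = 1_000_000_000     # "a billion highly educated people"
budget = 10_000_000_000_000       # "$10 trillion to throw at the problem"

total_cost = cost_per_phd * target_people
print(total_cost)                  # 1000000000000000: a full quadrillion dollars
print(budget // cost_per_phd)      # 10000000: only 10 million scholarships
```

Even a $10 trillion budget endows only 10 million scholarships, roughly the number of PhDs that already exist, which is the point of the argument: the marginal cost has to collapse, not the budget grow.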

(01:14:07)
It’s like the human condition is being held back by 500,000 well-meaning, average algebra teachers. I love them. I mean, please don’t take offense if you’re an algebra teacher, but instead of 500,000 algebra teachers going through the same motion over and over again, what you need is one or five or 10 really good algebra teachers, and they need to do it a billion times a day, or a billion times a year, for free. And if we do that, there’s no reason why you can’t give infinite education, certainly in science, technology, engineering, and math, right, infinite education to everybody with no constraint. And I think the same is true with just about every other thing. If you want to bring joy to the world, you need digital music. If you want to bring enlightenment to the world, you need digital education. If you want to bring anything of consequence in the world, you’ve got to digitally transform it, and then you’ve got to manufacture it something like 100 times more efficiently as a start, but a million times more efficiently is probably optimal. That’s hopeful. Maybe you have a chance.

(01:15:36)
If you look at all of these space endeavors and everything, we’re thinking about getting to Mars, getting off the planet, getting to other worlds. Number one thing you got to do is you got to make a fundamental breakthrough in an engine. People dreamed about flying for thousands of years, but until the internal combustion engine, you didn’t have enough energy, enough power in a light enough package in order to solve the problem. And the human race has all sorts of those fundamental engines and materials and techniques that we need to master. And each one of them is a lifetime of experimentation, of someone capable of making a seminal contribution to the body of human knowledge.
Lex Fridman
(01:16:27)
There are certain problems, like education, that could be solved through this process of dematerialization. And by the way, to give props to the 500K algebra teachers: when I look at YouTube, for example, one possible approach is that each one of those 500,000 teachers probably had days and moments of brilliance. And if they had the ability to contribute to a natural selection process, like a market of education where the best ones rise up, that’s a really interesting way, where the best day of your life, the best lesson you’ve ever taught, could be found and broadcast to billions of people. So all of those kinds of ideas can be made real in the digital world. Now, traveling across planets, you still can’t solve that problem with dematerialization. What you could solve potentially is dematerializing the human brain, where you don’t need to have astronauts on the ship. You can have a floppy disk carrying a human brain.
Michael Saylor
(01:17:41)
Touching on those points. You’d love for the 500,000 algebra teachers to become 500,000 math specialists, and maybe they clump into 50,000 specialties as teams and they all pursue 50,000 new problems and they put their algebra teaching on autopilot.
Lex Fridman
(01:17:57)
Yeah. Yes.
Michael Saylor
(01:17:58)
That’s the same as when I give you 11 cents worth of electricity and you don’t have to row a boat eight hours a day before you can eat. Right?
Lex Fridman
(01:18:09)
Yes.
Michael Saylor
(01:18:10)
It would be a lot better: you would pay for your food in the first eight seconds of your day, and then you could start thinking about other things. Right. With regard to technology, one thing that I learned studying technology, when you look at S-curves, is until you start the S-curve, you don’t know whether you’re 100 years from viability, 1000 years from viability, or a few months from viability. So,-
Lex Fridman
(01:18:42)
Isn’t that fun? That’s so fun. The early part of the S-curve is so fun because you don’t know.
Michael Saylor
(01:18:50)
In 1900 you could have got any number of learned academics to give you 10,000 reasons why humans will never fly.
Lex Fridman
(01:18:58)
Yeah.
Michael Saylor
(01:18:58)
Right. And in 1903, the Wright brothers flew, and by 1969 we’re walking on the moon. So the advance that we made in that field was extraordinary. But for the 100 years and 200 years before, they were just back and forth and nobody was close. And that’s the happy part. The happy part is we went from flying 20 miles an hour or whatever to flying 25,000 miles an hour in 66 years. The unhappy part is I studied aeronautical engineering at MIT in the 80s. And in the 80s we had Gulfstream aircraft, we had Boeing 737s, we had the space shuttle. And you fast-forward 40 years and we pretty much had the same exact aircraft. The efficiency of the engines was 20, 30% more.
Lex Fridman
(01:19:55)
Yeah.
Michael Saylor
(01:19:55)
Right. We slammed into a brick wall around ’69 to ’75. In fact, the Global Express, the Gulfstream, these were all engineered in the 70s, some in the 60s. The fuselage silhouette of a Gulfstream G5 was the same shape as a G4, the same shape as a G3, the same shape as a G2. And that’s because they were afraid to change the shape for 40 years; they worked it out in a wind tunnel, they knew it worked. And when they finally decided to change the shape, it was like a $10 billion exercise with modern supercomputers and computational fluid dynamics.
Lex Fridman
(01:20:40)
Why was it so hard? What is that wall made of that you slammed into?
Michael Saylor
(01:20:46)
The right question is: why does the guy that went to MIT and got an aeronautical engineering degree spend his career in software? Why is it that, with the exception of some Air Force Reserve work, I never got paid a day in my life to be an aeronautical engineer, and I worked in software engineering my entire career?
Lex Fridman
(01:21:03)
Well, maybe software engineering is the new aeronautical engineering in some way. Maybe you hit fundamental walls until you have to return to it centuries later, or no.
Michael Saylor
(01:21:17)
The National Gallery of Art was endowed by a very rich man, Andrew Mellon, and you know how he made his money? Aluminum. Okay. And you know what kind of airplanes you can create without aluminum? Nothing. Nothing, right?
Lex Fridman
(01:21:37)
So it’s a materials problem.
Michael Saylor
(01:21:39)
Okay. So around 1900, we made massive advances in metallurgy, right? I mean, that was US Steel, that was iron to steel, aluminum; massive fortunes were created because this was a massive technical advance. And then we also had the internal combustion engine, and the story of Ford and General Motors and DaimlerChrysler and the like is informed by that. So with no jet engines, no rocket motors, no internal combustion engines, you have no aviation. But even if you had those engines, if you were trying to build those things with steel, no chance. You had to have aluminum. So there’s two pretty basic technologies, and once you have those two technologies, stuff happens very fast. So tell me the last big advance in jet engines. There hasn’t been one. The last big advance in rocket engines? There hasn’t been one. The big advances in spaceship design, from what I can see, are in the control systems, the gyros, and the ability to land, right, in a stable fashion. That’s pretty amazing, landing a rocket.
Lex Fridman
(01:22:53)
Also, at least according to Elon and so on, the more efficient and less expensive manufacture of rockets. So it’s production, whatever you call that discipline of at-scale manufacture, at-scale production. So factory work. But it’s not 10X. I mean, maybe it’s 10X over a period of a few decades.
Michael Saylor
(01:23:18)
When we figure out how to run a spaceship for a year on the water in your water bottle.
Lex Fridman
(01:23:26)
Yeah.
Michael Saylor
(01:23:27)
Right. Now then you’ve got a breakthrough. So the bottom line is, propulsion technology, propellants, and the materials technology were critical to getting on that aviation S-curve. And then we slammed into a wall in the 70s, and the Boeing 747, the Global Express, the Gulfstream, the space shuttle, these things were all pretty much reflective of that. And then we stopped. And at that point, you have to switch to a new S-curve. So the next equivalent to the internal combustion engine was the CPU, and the next aluminum equivalent was silicon.

(01:24:07)
So when we actually started developing CPUs, the transistor gave way to CPUs. And if you look at the power, right, the bandwidth that we had on computers, and Moore’s law, right? What if the efficiency of jet engines had doubled every three years for the last 40 years? Where would we be right now? So I think that if you’re a business person, if you’re looking for a commercially viable application of your mind, then you have to find that S-curve. And ideally you have to find it in the first five, six, 10 years. But people always miss this. Let’s take Google Glass, right? Google Glass was an idea in 2013. The year is 2022. And people were quite sure this was going to be a big thing but,-
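The jet-engine counterfactual is worth computing: doubling every three years for 40 years compounds to roughly a 10,000x gain, versus the actual 20 to 30% improvement mentioned earlier. A one-liner to check:

```python
# If jet-engine efficiency had doubled every three years for 40 years
# (a Moore's-law-style cadence), the cumulative gain would be:
doublings = 40 / 3
improvement = 2 ** doublings
print(f"{improvement:,.0f}x")   # 10,321x, versus the actual ~20-30% gain
```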
Lex Fridman
(01:25:03)
And it could have been at the beginning of the S-curve.
Michael Saylor
(01:25:07)
But fundamentally, we didn’t really have an effective mechanism. I mean, people were getting vertigo and they’re-
Lex Fridman
(01:25:14)
But you didn’t know that at the beginning of the S-curve, right? I mean, maybe some people had a deep intuition about the fundamentals of augmented reality, but you don’t know that. You don’t have those, you’re looking through the fog. You don’t know.
Michael Saylor
(01:25:28)
So the point is, we’re year zero in 2013, and we’re still year zero in 2022 on that augmented reality. And when somebody puts out a set of glasses that you can wear comfortably without getting vertigo, right, without any disorientation that managed to have the stability and the bandwidth necessary to sync with the real world, you’ll be in year one. And from that point, you’ll have a 70 year or some interesting future until you slam into a limit to growth.
Lex Fridman
(01:26:03)
Yeah.
Michael Saylor
(01:26:04)
And then it’ll slow down. And this is the story of a lot of things, right? I mean, John D. Rockefeller got in the oil business in the 1860s, and the oil business as we understood it became fairly mature by the 1920s to ’30s. And then it actually stayed that way until we got to fracking, which was like 70 years later, and then it burst forward, so.
Lex Fridman
(01:26:33)
The interesting story about Moore’s law though is that you get this constant burst of S-curves, on top of S-curves, on top of S-curves. It’s like the moment you start slowing down or almost ahead of you slowing down, you come up with another innovation, another innovation. So Moore’s law doesn’t seem to happen in every technological advancement. It seems like you only get a couple of S-curves and then you’re done for a bit. So I wonder what the pressures there are that resulted in such success over several decades and still going.
Michael Saylor
(01:27:07)
Humility dictates that nobody knows when the S-curve kicks off, and you could be 20 years early or 100 years early. Leonardo da Vinci, Michelangelo, they were designing flying machines hundreds and hundreds of years ago. So humility says you’re not quite sure when you really hit that commercial viability. And it also dictates you don’t know when it ends. When will the party stop? When will Moore’s law stop and we’ll get to the point where there are exponentially diminishing returns on silicon performance, just like we got exponentially diminishing returns on jet engines, and it just takes an exponential increase in effort to make it 10% better? But while you’re in the middle of it, then you know you can do things.

(01:28:01)
So the reason that the digital revolution is so important is because of the underlying platforms, the bandwidth and the performance of the components, and I say the components are the radio protocols, mobile protocols, the batteries, the CPUs, and the displays. Right. Those four components are pretty critical. They’re all critical in the creation of an iPhone. I wrote about it in the book, The Mobile Wave, and they catalyzed this entire mobile revolution. Because they have advanced and continue to advance, they created the very fertile environment for all these transformations. And the digital transformations themselves, right, they call for creativity in their own right.

(01:28:59)
I think the interesting thing about let’s take digital maps. Right. When you conceptualize something as a dematerialized map, right, it becomes a map because I can put it on a display like an iPad or I can put it in a car like a Tesla. But if you really want to figure it out, you can’t think like an engineer. You need to think like a fantasy writer. This is where it’s useful if you played Dungeons and Dragons and you read Lord of the Rings and you studied all the fantasy literature, because when I dematerialize the map, first I put 10 million pages of satellite imagery into the map. Right.

(01:29:43)
That’s a simple physical transform. But then I start to put telemetry into the map, and I keep track of the traffic rates on the roads, and I tell you whether you’ll be in a traffic jam if you drive that way, and I tell you which way to drive. And then I start to get feedback on where you’re going. And I tell you, the restaurant’s closed and people don’t like it anyway. And then I put an AI on top of it and I have it drive your car for you. And eventually the implication of digital transformation of maps is I get into a self-driving car and I say, take me someplace cool where I can eat.
Lex Fridman
(01:30:20)
Yeah.
Michael Saylor
(01:30:20)
Right. And how did you get to that last step? Right. It wasn’t simple engineering. There’s a bit of fantasy in there, a bit of magic.
Lex Fridman
(01:30:30)
Design, art, whatever the heck you call it, it’s whatever. Yeah. Fantasy injects magic into the engineering process. Imagination precedes great revolutions in engineering. It’s like imagining a world, like of what you can do with the display. How will the interaction be? That’s where Google Glass actually came in, augmented reality, virtual reality, people are playing in the space of sci-fi, imagination.
Michael Saylor
(01:31:00)
They called it a moonshot. They tried, it didn’t work, but to their credit, they stopped trying.
Lex Fridman
(01:31:05)
And then there’s new people. They keep dreaming. Dreamers all around us. I love those dreamers. And most of them fail and suffer because of it, but some of them win Nobel Prizes or become billionaires.
Michael Saylor
(01:31:18)
Well, what I would say is if half the civilization dropped what they were doing tomorrow and eagerly started working on launching a rocket to Alpha Centauri, it might not be the best use of our resources because it’s kind of like if half of Athens in the year 500 BC eagerly started working on flying machines. If you went back and you said, what advice would you give them, you would say, it’s not going to work until you get to aluminum. And you’re not going to get to aluminum until you work out the steel and certain other things. And you’re not going to get to that until you work out the calculus of variations and some metallurgy. And there’s a dude Newton that won’t come along for quite a while and he’s going to give you the calculus to do it. And until then, it’s hopeless.

(01:32:09)
So you might be better off to work on the aqueduct or to focus upon sales or something. So if I look at this today, I say there’s massive profound civilization advances to be made through digital transformation of information. And you can see them. This is not the story of today, right? It’s 10 years old, what we’ve been seeing.
Lex Fridman
(01:32:36)
We’re living through different manifestations of that story today too though, like social media, the effects of that is very interesting because ideas spread even, you talk about velocity of money, the velocity of ideas keeps increasing.
Michael Saylor
(01:32:51)
Yeah.
Lex Fridman
(01:32:52)
So Wikipedia is a passive store. It’s a store of knowledge. Twitter is like a water hose or something. It’s like spraying you with knowledge whether you want it or not. It’s like social media is just like this explosion of ideas. And then we pick them up and then we try to understand ourselves because the drama of it also plays with our human psyche. So sometimes there’s more ability for misinformation, for propaganda to take hold. So we get to learn about ourselves, we get to learn about the technology that can decelerate the propaganda, for example, all that kind of stuff. But the reality is we’re living, I feel like we’re living through a singularity in the digital information space, and we don’t have a great understanding of exactly how it’s transforming our lives.
Michael Saylor
(01:33:43)
And this is where money is useful as a metaphor for significance. Because if money is the economic energy of the civilization, then something that’s extraordinarily lucrative that’s going to generate a monetary or a wealth increase is a way to increase the net energy in the civilization. And ultimately, if we had 10 times as much of everything, we’d have a lot more free resources to pursue all of our advanced scientific and mathematical and theoretical endeavors. So let’s take Twitter. Right. Twitter’s something that could be 10 times more valuable than it is. Right. Twitter could be made 10 times better.
Lex Fridman
(01:34:27)
Oh, by the way, I should say that people should follow you on Twitter. Your Twitter account is awesome.
Michael Saylor
(01:34:30)
Thank you. Thank you.
Lex Fridman
(01:34:32)
It could be made 10 times better. Yeah.
Michael Saylor
(01:34:34)
Yeah, Twitter can be made 10 times better. If we take YouTube or take education, we could generate a billion PhDs. And the question is, do you need any profound breakthrough in materials or technology to do that? The answer is not really. Right. So if you want to, you could make Apple, Amazon, Facebook, Google, Twitter, all these things better. The United States government, if they took 1% of the money they spend on the Department of Education and they simply poured it into digital education and they gave degrees to people that actually met those requirements, they could provide 100X as much education for one-hundredth of the cost, and they could do it with no new technology. That’s a marketing and political challenge.

(01:35:30)
So I don’t think every objective is equally practical. And I think the benefit of being an engineer or thinking about practical achievements is when the government pursues an impractical objective or when anybody, an entrepreneur, not so bad with an entrepreneur because they don’t have that much money to waste. When a government pursues an impractical objective, they squander trillions and trillions of dollars and achieve nothing. Whereas if they pursue a practical objective or if they simply get out of the way and do nothing and they allow the free market to pursue the practical objectives, then I think you can have profound impact on the human civilization.

(01:36:20)
And if I look at the world we’re in today, I think that there are multi-trillion, 10, 20, $50 trillion worth of opportunities in the digital information realm yet to be obtained. But there’s hundreds of trillions of dollars of opportunities in the digital energy realm that not only are not obtained, the majority of people don’t even know what digital energy is. Most of them would reject the concept. They’re not looking for it. They’re not expecting to find it. It’s inconceivable because it is a paradigm shift, but in fact, it’s completely practical. Right under our nose. It’s staring at us, and it could make the entire civilization work dramatically better in every respect.

Digital energy and assets

Lex Fridman
(01:37:18)
So you mentioned in the digital world, digital information is one, digital energy is two, and the possible impact on the world and the set of opportunities available in the digital energy space is much greater. So how do you think about digital energy? What is it?
Michael Saylor
(01:37:41)
So I’ll start with Tesla. He had a very famous quote. He said, “If you want to understand the universe, think in terms of energy, vibration, and frequency.” And it gets you thinking about what is the universe? And of course, the universe is just all energy. And then what is matter? Matter is low frequency energy. And what are we? We’re vibrating, ashes to ashes, dust to dust. I can turn a tree into light. I can turn light back into a tree. If I consider the entire universe, and it’s very important because we don’t really think this way. Let’s take the New York disco model. If I walk into a nightclub and there’s loud music blaring in New York City, what’s really going on there? Right. If you blast out 14 billion years ago, the universe is formed. Okay, that’s a low frequency thing. The universe. Four and a half billion years ago, the sun, maybe the earth are formed. The continents are 400 million years old. The schist that New York City is on is some hundreds of millions of years old, but the Hudson River is only 20,000 years old.

(01:38:58)
There’s a building that’s probably 50 years old. There’s a company operating that disco or that club, which is five to 10 years old. There’s a person, a customer walking in there for an experience for a few hours. There’s music that’s oscillating at some kilohertz, and then there’s light.
Lex Fridman
(01:39:20)
Right.
Michael Saylor
(01:39:20)
And you have all forms of energy, all frequencies, right, all layered, all moving through different medium. And how you perceive the world is the question of at what frequency do you want to perceive the world? And I think that once you start to think that way, you’re catalyzed to think about what would digital energy look like and why would I want it? And what is it? So why don’t we just start right there. What is it? The most famous manifestation of digital energy is Bitcoin. Bitcoin’s a crypto asset. It’s a crypto asset that has monetary value.
Lex Fridman
(01:40:08)
Can we just linger on that? Bitcoin is a digital asset that has monetary value. What is a digital asset? What is monetary value? Why use those terms versus the words money and currency? Is there something interesting in that disambiguation of different terms?
Michael Saylor
(01:40:30)
I’d call it a crypto asset network. The goal is to create a billion-dollar block of pure energy in cyberspace, one that I could then move with no friction at the speed of light. Right. It’s the equivalent of putting a million pounds in orbit. How do I actually launch something into orbit? Right. How do I launch something into cyberspace such that it moves friction-free? And the solution is a decentralized proof-of-work network. Right. Satoshi’s solution was, I’m going to establish a protocol running on a distributed set of computers that will maintain a constant supply of never more than 21 million Bitcoin, subdividable into 100 million satoshis each, transferable via transferring private keys. Now, the innovation is to create that in an ethical, durable fashion. Right. The ethical innovation is I want it to be property and not a security. A bushel of corn, an acre of land, a stack of lumber, a bar of gold, and a Bitcoin are all property. And that means they’re all commonly occurring elements in the world.
Michael Saylor
(01:42:00)
… they’re all commonly occurring elements in the world. You could call them commodities, but commodity is a little bit misleading, and I’ll tell you why in a second. But they’re all distinguished by the fact that no one entity or person or government controls them. If you have a barrel of oil and you’re in Ukraine versus Russia versus Saudi Arabia versus the US, you have a barrel of oil, right? And it doesn’t matter what the premier in Japan or the mayor of Miami Beach thinks about your barrel of oil, they cannot wave their hand and make it not a barrel of oil or a cord of wood. And so property is just a naturally occurring element in the universe.
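The supply rules summarized above can be checked with a short sketch of Bitcoin’s halving schedule. The constants (21 million coins, 100 million satoshis per coin, a 50 BTC initial block subsidy halving every 210,000 blocks) are the protocol’s published parameters; the loop is an illustration, not consensus code.

```python
# Sketch of the supply schedule: the block subsidy starts at 50 BTC
# and halves (with integer division, as in the protocol) every
# 210,000 blocks, which is why the total can never reach 21 million.
SATS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

subsidy_sats = 50 * SATS_PER_BTC
total_sats = 0
while subsidy_sats > 0:
    total_sats += BLOCKS_PER_HALVING * subsidy_sats
    subsidy_sats //= 2  # the subsidy eventually rounds down to zero

print(total_sats / SATS_PER_BTC)  # just under 21,000,000 BTC
```

Because the subsidy is an integer number of satoshis, the geometric series terminates and the cap comes out slightly below 21 million, never equal to it.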
Lex Fridman
(01:42:49)
Why use the word ethical? And sorry, I may interrupt occasionally. Why ethical assigned to property?
Michael Saylor
(01:42:58)
Because if it’s a security, a security would be an example of a share of a stock or a crypto token controlled by a small team. And in the event that something is a security because some small group or some identifiable group can control its nature, character, supply, then it really only becomes ethical to promote it or sell it pursuant to fair disclosures. So, I’ll give you maybe practical example. I’m the mayor of Chicago. I give a speech. In my speech, I’ll say, “I think everybody in Chicago should own their own farm and have a chicken in the backyard and their own horse and an automobile.” That’s ethical. I give the same speech and I say, “I think everybody in Chicago should buy Twitter stock. Sell their house or sell their cash and buy Twitter stock.” Is that ethical? Not really. But at that point you’ve entered into a conflict of interest because what you’re doing is you’re promoting an asset which is substantially controlled by a small group of people, the board of directors or the CEO of the company.

(01:44:18)
So, how would you feel if the president of the United States said, “I really think Americans should all buy Apple stock,” especially if you worked at Google. But if you worked anywhere, you’d be like, “Why isn’t he saying buy mine?” Right? A security is a proprietary asset in some way, shape or form. And the whole nature of securities law, it starts from this ancient idea, thou shalt not lie, cheat or steal. Okay? If I’m going to sell you securities or I’m going to promote securities as a public figure or as an influencer or anybody else. If I create my own Yo-Yo coin or Mikey coin, and then there’s a million of them, and I tell you that I think that it’s a really good thing, and Mikey coin will go up forever and everybody buys Mikey coin and then I give 10 million to you and don’t tell the public, I’ve cheated them.

(01:45:22)
Maybe if I have Mikey coin and I think there’s only 2 million Mikey coin, and I swear to you there’s only 2 million, and then I get married and I have three kids and my third kid is in the hospital and my kid’s going to die and I have this ethical reason to print 500,000 more Mikey coin or else people are going to die, and everybody tells me it’s fine, I’ve still abused the investor, right? It’s an ethical challenge. If you look at ethics laws everywhere in the world, they all boil down to having a clause which says that if you’re a public figure, you can’t endorse a security. You can’t endorse something that would cause you to have a conflict of interest.

(01:46:08)
So, if you’re a mayor, a governor, a country, a public figure, an influencer, and you want to promote or promulgate or support something using any public influence or funds or resources you may have, it needs to be property. It can’t be security. So, it goes beyond that, right? I mean, would the Chinese want to support an American company? As soon as you look at what’s in the best interest of the human race, the civilization, you realize that if you want an ethical path forward, it needs to be based on common property, which is fair. And the way you get to a common property is through an open permissionless protocol. If it’s not open, if it’s proprietary and I know what the code says and you don’t know what the code says, that makes it a security.

(01:47:05)
If it’s permissioned, you’re not allowed on my network. Or if you can be censored or booted off my network, that also makes it a security. When I talk about property, I mean the challenge here is how do I create something that’s equivalent to a barrel of oil in cyberspace? And that means it has to be a non-sovereign bearer instrument: open, permissionless, not censorable, right? If I could do that, then I could deliver you 10,000 dematerialized barrels of oil and you would take settlement of them and you would know that you have possession of that property, regardless of the opinion of any politician or any company or anybody else in the world.

(01:48:05)
That’s a really critical characteristic. And it’s probably one of the fundamental things that makes Bitcoin special. Bitcoin isn’t just a crypto-asset network. It’s easy to create a crypto-asset network. It’s very hard to create an ethical crypto-asset network, because you have to create one without any government or corporation or investor exercising undue influence to make it successful.

Oil barrel vs Bitcoin

Lex Fridman
(01:48:37)
So open, permissionless, noncensorable. So basically there’s no way for control to pass to somebody else without you explicitly saying so. You have full control. Even with a barrel of oil, what’s the difference between a barrel of oil and a Bitcoin to you? Because you kind of mentioned that both are property. You mentioned Russia and China and so on. Is it the ability of the government to confiscate? In the end, governments can probably confiscate no matter what the asset is, but you want to lessen the effort involved.
Michael Saylor
(01:49:21)
A barrel of oil is a bucket of physical property. Liquid property.
Lex Fridman
(01:49:27)
That’s very [inaudible 01:49:27].
Michael Saylor
(01:49:27)
And Bitcoin is a digital property.
Lex Fridman
(01:49:27)
But it’s easier to confiscate a barrel of oil.
Michael Saylor
(01:49:32)
It’s easier to confiscate things in the real world than things in cyberspace, much easier.
Lex Fridman
(01:49:38)
So, that’s not universally true. Some things in the digital space are actually easier to confiscate because just the nature of how things move easily with information, right?
Michael Saylor
(01:49:50)
I think in the Bitcoin world, what we would say is that Bitcoin is the most difficult property to confiscate that the human race possesses or has yet invented. And that’s by virtue of the fact that you can take possession of it via your private keys. So, if you’ve got your 12 seed words in your head, then that would be the highest form of property, right? Because I literally have to crack your head open and read your mind to take it. It doesn’t mean I couldn’t extract it from you under duress, but it means that it’s harder than every other thing you might own. In fact, it’s exponentially harder.
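For scale on the seed-words point: a standard BIP-39 mnemonic draws 12 words from a fixed 2,048-word list, which is where the brute-force difficulty comes from. A sketch of the arithmetic (not key-derivation code):

```python
import math

# Each BIP-39 word comes from a 2048-word list, so every word
# encodes 11 bits of the seed.
WORDLIST_SIZE = 2048
WORDS = 12

bits = WORDS * math.log2(WORDLIST_SIZE)  # 132 bits (128 entropy + 4 checksum)
combinations = WORDLIST_SIZE ** WORDS    # 2**132 possible mnemonics

print(bits)                   # 132.0
print(combinations > 10**39)  # True: an astronomically large search space
```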

(01:50:29)
If you consider every other thing you might own. A car, a house, a share of stock, gold, diamonds, property rights, intellectual property rights, movie rights, music rights. Anything imaginable, they would all be easier by orders and orders of magnitude to seize. So, digital property in the form of a set of private keys is by far the apex property of the human race. In terms of ethics, I want to make one more point. It’s like I might say to you, “Lex, I think Bitcoin is the best, most secure, most durable crypto asset network in the world, it’s going to go up forever and there’s nothing better in the world.”

(01:51:11)
I might be right, I might be wrong, but the point is because it’s property, it’s ethical for me to say that. If I were to turn around and say, “Lex, I think the same about MicroStrategy stock, MSTR,” that’s a security. Okay? If I’m wrong about that, I have civil liability or other liability, because I could go to a board meeting tomorrow and I could actually propose we issue a million more shares of MicroStrategy stock. Whereas the thing that makes Bitcoin ethical for me to even promote is the knowledge that I can’t change it. If I knew that I could make it 42 million instead of 21 million and I had the button back here, then I’d have a different degree of ethical responsibility.

(01:52:05)
Now, I could tell you your life will be better if you buy Bitcoin, and it might not be. You might go buy Bitcoin, you might lose the keys and be bankrupt, and your life ends and your life is not better because you bought Bitcoin. But it wouldn’t be my ethical liability any more than if I were to say, “Lex, I think you ought to get a farm. I think you should be a farmer. A chicken in every pot, you should get a horse. I think you’d be better off.” I mean, they’re all opinions expressed about property, which may or may not be right, that you may or may not agree with. But in a legal sense, if we read the law, if we understand securities law… And I would say most people in the crypto industry, they didn’t take companies public, and so they’re not really focused on the securities law. They don’t even know the securities law.

(01:52:58)
If you focus on the securities law, that would say you just can’t legally sell this stuff to the general public or promote it without a full set of continuing disclosures signed off on by a regulator. So, there’s a fairly bright line there with regard to securities, but when you get to the secondary issue, it’s how do you actually build a world based on digital property if public figures can’t embrace it or endorse it? You see? So, you’re not going to build a better world based upon Twitter stock, if that’s your idea of property, because Twitter stock is a security, and Twitter stock is never going to be a non-sovereign bearer instrument in Russia, right? Or in China, it’s not even legal in China.

(01:53:55)
It’s not a global permissionless, open thing. It will never be trusted by the rest of the world, and legally it’s impractical. But would you really want to put a hundred trillion dollars worth of economic value on Twitter stock if there’s a board of directors and a CEO that could just get up and take half of it tomorrow? The answer is no. So, if you want to build a better world based on digital energy, you need to start with constructing a digital property, and I’m using property here-
Lex Fridman
(01:54:28)
Open, permissionless, [inaudible 01:54:30]-
Michael Saylor
(01:54:30)
In the legal sense, but I would also go to the next step and say property is low frequency money. So, if I give you a million dollars and you want to hold it for a decade, you might go buy a house with it and the house is low frequency money. You converted the million dollars of economic energy into a structure called a house. Maybe after a decade you might convert it back into energy. You might sell the house for currency and it’ll be worth more or less depending upon the monetary climate you sell in.
Lex Fridman
(01:55:08)
The frequency means what here? How quickly it changes state?
Michael Saylor
(01:55:13)
How quickly does something vibrate? If I transfer $10 from me to you for a drink, and then you turn around and you buy another, right? We’re vibrating on a frequency of every few hours. The energy is changing hands, but it’s not likely that you sell and buy houses every few hours. The frequency of a transaction in real estate is every 10 years, every five years. It’s much lower frequency transaction. And so when you think about what’s going on here, you have extremely low frequency things, which we’ll call property. Then you have mid-frequency things. I’m going to call them money or currency. And then you have high frequency, and that’s energy.

(01:56:09)
And that’s why I use the illustration of you got the building, you got the light and you got the sound, and they’re all just energy moving at different frequencies. Now, Bitcoin is magical and it is truly the innovation. It’s like a singularity, because it represents the first time in the history of the human race that we managed to create a digital property, properly understood. It’s easy to create something digital, right? Every coupon and every skin on Fortnite and Roblox and Apple TV credits and all these things, they’re all digital something, but they’re securities, right?

(01:56:53)
Shares of stock are securities. Whenever anybody transfers, when you transfer money on PayPal or Apple Pay, you’re transferring in essence, a security or an IOU. So, transferring a bearer instrument with final settlement in the internet domain or in cyberspace, that’s a critical thing. And anybody in the crypto world can do that. All the cryptos can do that. But what they can’t do, what 99% of them fail to do is be property. They’re securities.
Lex Fridman
(01:57:27)
Well, there’s a line there I’d like to explore a little further. For example, what about when you… Like Coinbase or something like that, when there’s an exchange that you buy Bitcoin in, you start to move away from some of the aspects that you said make up a property, which is this noncensorable and permissionless and open. So, in order to achieve the convenience, the effectiveness of the transfer of energy, you have to leverage some of these [inaudible 01:58:10] that remove the aspects of property. So, maybe you can comment on that.

Layers of Bitcoin

Michael Saylor
(01:58:14)
Let me give you a good model for that. If you think about the layer one of Bitcoin, the layer one is the property settlement layer, and we’re going to do 350,000 transactions or less a day; a hundred million transactions a year is the bandwidth on the layer one. And it would be an ideal layer one to move a billion dollars from point A to point B with massive security. The role of the layer one is two things. One thing is I want to move a large sum of money through space with security. I can move any amount of Bitcoin in a matter of minutes for dollars on layer one.
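The two bandwidth figures quoted here are consistent with each other, as a quick check shows (illustrative round numbers, not exact chain statistics):

```python
# Bitcoin's base layer settles roughly 350,000 transactions a day;
# over a year that lands on the order of a hundred million.
TX_PER_DAY = 350_000
tx_per_year = TX_PER_DAY * 365

print(tx_per_year)  # 127,750,000 transactions, roughly 100 million/year
```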

(01:58:59)
The second important feature of the layer one is I need the money to last forever. I need the money indestructible, immortal. So, the bigger trick is not to move a billion dollars from here to Tokyo. The big trick is to move a billion dollars from here to the year 2140. And that’s what we want to solve with layer one. And the best real metaphor in New York City would be the granite or the schist. What you want is a city block of bedrock. And how long has it been there? Millions of years. And how fast do you want it to move? You don’t. In fact, the single thing that’s most important is that it not deflect. If it deflects a foot in a hundred years, it’s too much. If it deflects an inch in a hundred years, you might not want that.

(01:59:53)
So, the layer one of Bitcoin is a foundation upon which you put weight. How much weight can you put on it? You put a trillion, 10 trillion, a hundred trillion, a quadrillion. How much weight’s on the bedrock in Manhattan, right? Think about hundred-story buildings. So, the real key there is that the foundational asset needs to be there at all. The fact that you can create a hundred-trillion-dollar layer one that would stand for a hundred years, that is the revolutionary breakthrough, a first.

(02:00:27)
And the fact that it’s ethical, right? It’s ethical and it’s common property, global, permissionless. Extremely unlikely that would happen. People tried 50 times before and they all failed. They tried 15,000 times after, and they’ve all been… They’ve all generally failed. 98% have failed and a couple have been less successful. But for the most part, that’s an extraordinary thing. Now-
Lex Fridman
(02:00:54)
Just really quickly pause, just to define some terms. If maybe people don’t know, layer one that Michael’s referring to is in general what people know of as the Bitcoin technology originally defined. Which is there’s blockchain, there’s a consensus mechanism of proof of work, low number of transactions, but you can move a very large amount of money.

(02:01:22)
The reason he’s using the term layer one is that now there are a lot of ideas for layer two technologies that build on top of this bedrock, that allow you to move a much larger number of transactions, sort of higher frequency, I don’t know what terminology you’d want to use, but basically be able to use something that is based on Bitcoin to then buy stuff, be a consumer, to transfer money, to use it as currency. Just to define some terms.
Michael Saylor
(02:01:54)
Yeah. So, the layer one is the foundation for the entire cyber economy, and we don’t want it to move fast. What we want is immortality. Immortal, incorruptible, indestructible. That’s what you want, integrity from the layer one. Now there’s layer two and layer three and layer two I would define as an open, permissionless, non-custodial protocol that uses the underlying layer one token as its gas fee.
Lex Fridman
(02:02:34)
So, what’s custodial mean, and how do the different markets… Like, is the Lightning Network-
Michael Saylor
(02:02:40)
So, Lightning Network would be an example of a layer two, non-custodial. The Lightning Network will sit on top of layer one. It’ll sit on top of Bitcoin and it solves… What you want to do is solve the problem of, “It’s well and fine. I don’t want to move a billion dollars every day. What I want to move is $5 a billion times a day.” So, if I want to move $5 a billion times a day, I don’t really need to put the entire trillion dollars of assets at risk every time I move $5. All I really need to do is put a hundred thousand dollars in a channel or a million dollars in a channel, and then I do 10 million transactions where I have a million dollars at risk.

(02:03:27)
And of course, it’s kind of simple. If I lower my security requirement by a factor of a million, I can probably move the stuff a million times faster. And that’s how Lightning works. It’s non-custodial because there’s no corporation or custodian or counterparty you’re trusting, right? There’s the risk of moving through the channel. But Lightning is an example of how I go from 350,000 transactions a day to 350 million transactions a day. So, on that layer two, you could move the Bitcoin in seconds for fractions of pennies.
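The channel mechanics described here can be sketched in a few lines of Python. This is a toy model, not the actual Lightning protocol (real channels use signed commitment transactions and HTLCs); it only illustrates the accounting: one on-chain transaction opens the channel, thousands of cheap off-chain updates move value, and one more on-chain transaction settles.

```python
# Toy payment-channel accounting: lock a fixed amount on layer one, then
# exchange many off-chain balance updates, and settle on-chain once.
# Names and numbers are illustrative, not the real Lightning protocol.

class Channel:
    def __init__(self, alice_sats, bob_sats):
        # One on-chain transaction funds the channel; only this amount is at risk.
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.onchain_txs = 1          # the funding transaction
        self.offchain_updates = 0

    def pay(self, frm, to, amount):
        # An off-chain update: both parties agree to a new balance sheet.
        if self.balances[frm] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.offchain_updates += 1

    def close(self):
        # One more on-chain transaction settles the final balances.
        self.onchain_txs += 1
        return dict(self.balances)

ch = Channel(alice_sats=1_000_000, bob_sats=0)
for _ in range(10_000):               # ten thousand small payments...
    ch.pay("alice", "bob", 50)
final = ch.close()                    # ...settled with just 2 on-chain txs
```

Ten thousand payments settle with exactly two base-layer transactions, which is the "lower the security requirement by a factor of a million, move a million times faster" trade being described.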

(02:04:08)
Now, that’s not the end all, be all because the truth is there are a lot of open protocols. Lightning probably won’t be the only one. There’s an open market competition of other permissionless, open source protocols to do this work. And in theory, any other crypto network that was deemed to be property, deemed to be non-security, you could also think of as potentially a layer two to Bitcoin. There’s a debate about are there any and what are they? And we could leave that for a later time.
Lex Fridman
(02:04:43)
But why do you think of them as layer two as opposed to contending for layer one?
Michael Saylor
(02:04:50)
Yeah, actually, if they’re using their own token, then they are a layer one. If you create an open protocol that uses the Bitcoin token as the fee, then it becomes a layer two. Bitcoin itself incentivizes its own transactions with its own token, and that’s what makes it layer one.
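The classification rule stated here can be written down as a tiny predicate. This is just a restatement of the definition in the conversation, with custody folded in from the layer-three discussion; the function and names are illustrative, not any real taxonomy.

```python
# Toy classifier for the layering rule above: a network that incentivizes
# transactions with its own token is a layer one; an open, non-custodial
# protocol whose fees are paid in the underlying token is a layer two;
# a custodial service is a layer three. Purely illustrative.

def classify(name, fee_token, custodial=False):
    if custodial:
        return "layer 3"          # exchanges, payment apps
    if fee_token == name:
        return "layer 1"          # pays fees in its own token
    return "layer 2"              # rides on another chain's token

assert classify("Bitcoin", fee_token="Bitcoin") == "layer 1"
assert classify("Lightning", fee_token="Bitcoin") == "layer 2"
assert classify("Coinbase", fee_token="Bitcoin", custodial=True) == "layer 3"
```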
Lex Fridman
(02:05:11)
Okay, what’s layer three then?
Michael Saylor
(02:05:13)
Layer three is a custodial layer. So, if you want to move Bitcoin in milliseconds for free, you move it through Binance or Coinbase or Cash App. This is a very straightforward thing. I mean, it seems pretty obvious when you think about it that there are going to be hundreds of thousands of layer threes. There may be dozens of layer twos. I mean, Lightning is a one, but it’s not the only one, anybody can invent something. And we can have this debate about custodial, non-custodial.
Lex Fridman
(02:05:50)
Don’t you think there are monopolization possibilities at layer three? You mentioned Binance, Coinbase. What if they start to dominate and basically everybody’s using them, practically speaking, and then it becomes too costly to memorize the private key in your brain or use cold storage at the layer one level?
Michael Saylor
(02:06:19)
The idealists fear the layer threes because they think… And especially they detest, they would detest a bit… There’s almost like a layer four, by the way, if you want to. A layer four would be, I’ve got Bitcoin on an application, but I can’t withdraw it. So, I’ve got an application that’s backed by Bitcoin, but the Bitcoin is sealed. It’s a proprietary example, and I’ll give you an example of that. That would be like Grayscale. If I own a share of GBTC, so I own a security. Actually, you could own MSTR. If you own a security or you own a product that has Bitcoin embedded in it, you get the benefits of Bitcoin, but you don’t have the ability to withdraw the asset.
Lex Fridman
(02:07:07)
To get on the security market at layer four? Am I understanding this correctly?
Michael Saylor
(02:07:12)
I don’t know if I would say… Not all securities are layer four, but anything that’s a proprietary product with Bitcoin embedded in it where you can’t withdraw the Bitcoin is another application of Bitcoin. If you think about different ways you can use this, you can either stay completely on the layer one and use the base chain for your transactions, or you can limit yourself to layer one and layer two, Lightning. And the purist would say, “We stay there, get your Bitcoin off the exchange.” But you could also go to the layer three.

(02:07:50)
When Cash App supported Bitcoin, they made it very easy to buy it, and then they gave you the ability to withdraw. When PayPal or I think Robinhood let you buy it, they wouldn’t let you withdraw it, and it was a big community uproar, and people want these layer threes to make it possible to withdraw the Bitcoin. You can take it to your own private wallet and get it off the exchange. I think the answer to the question of, “Well, is corruption possible?” is: corruption is possible in all human institutions and all governments everywhere. The difference between digital property and physical property is when you own a building in Los Angeles and the city politics turn against you, you can’t move the building.

(02:08:36)
And when you own a share of a security that’s like a US traded security and you wish to move to some other country, you can’t take the security with you either. And when you own a bunch of gold and you try to get through the airport, they might not let you take it. So, Bitcoin is advantageous versus all those because you actually do have the option to withdraw your asset from the exchange. If you had Bitcoin with Fidelity and you had shares of stock with Fidelity, and if you had bonds and sovereign debt with Fidelity, and if you own some mutual funds and some other random limited partnerships with Fidelity, none of those things can be removed from the custodian. But the Bitcoin, you can take off the exchange, you can remove from the custodian.
Lex Fridman
(02:09:35)
It’s still possible though-
Michael Saylor
(02:09:36)
There’s a deterrent.
Lex Fridman
(02:09:36)
Yes.
Michael Saylor
(02:09:37)
There’s a deterrent. That’s an anti-corrupting element. And the phrase is, “An armed society is a polite society,” right? Because you have the optionality to withdraw all your assets from the crypto exchange, you can enforce fairness. And at the point where you disagree with their policies, you can within an hour move your assets to another counterparty or take personal custody of those assets and you don’t have that option with most other forms of property. You don’t have as much optionality with any other form of property on earth. And so what makes digital property distinct is the fact that it has the most optionality for custody.

(02:10:23)
Now, coming back to this digital energy issue, the real key point is the energy moves in milliseconds for free on layer threes. It moves in seconds or less than seconds on layer twos, it moves in minutes on the layer one. And I don’t think it makes any sense to even think about trying to solve all three problems on the layer one because it’s impossible to achieve the security and the incorruptibility and immortality if you try to build that much speed and that functionality and performance.

(02:10:58)
In fact, if you come back to the New York model, you really wanted a block of granite, a building and a company. That’s what makes the economy, right? If I said to you, “You’re going to build a building, but you can only have one company in it for the life of the building,” it would be very fragile, very brittle. What company a hundred years ago is still relevant today? You want all three layers because they all oscillate at different frequencies. And there’s a tendency to think, “Well, it’s got to be this L1 or that L1.” Not really. And sometimes people think, “Well, I don’t really want any L3.”

(02:11:38)
But companies, it’s not an either/or. Companies are better than crypto asset networks at certain things. If you want complexity, you want to implement complexity or you want to implement compliance or customer service. Companies do these things well. We know you couldn’t decentralize Apple or Netflix or even YouTube. The performance wouldn’t be there and the subtlety wouldn’t be there. And you can’t really legally decentralize certain forms of banking and insurance because they would become illegal in the political jurisdiction they’re in.

(02:12:19)
So, unless you’re a crypto anarchist and you believe in no companies and no nation states, which is just not very practical, not anytime soon. Once you allow that nation states will continue and companies have a role, then the layered architecture follows, and the free market determines who wins. For example, there are layer threes that let you acquire Bitcoin and withdraw Bitcoin. There are other applications that let you acquire but not withdraw it, and they don’t get the same market share, but they might give you some other advantage. There are certain layer threes, like Jack Dorsey’s Cash App where they just incorporated Lightning, an implementation of it.
Lex Fridman
(02:13:15)
Into Cash App?
Michael Saylor
(02:13:16)
So, that makes it advantageous versus an application that doesn’t incorporate Lightning. If you think about the big picture, the big picture is 8 billion people with mobile phones served by a hundred million companies doing billions of transactions an hour. And the companies are settling with each other on the base layer in blocks of 80 million at a time. And then the companies are trading with the consumers in proprietary layers, like layer three. And then on occasion, people are shuffling assets across custodians with Lightning layer two, because you don’t want to pay $5 to move $50. You want to pay a 20th of a penny to move $50.
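The fee comparison in that last sentence is worth doing explicitly. A quick sketch of the arithmetic, using the transcript's own numbers (the $5 base-layer fee is the hypothetical figure given in the conversation):

```python
# Fee overhead at each layer, using the numbers from the conversation.
transfer = 50.00            # the payment being moved
base_fee = 5.00             # hypothetical base-layer (layer one) fee
l2_fee = 0.01 / 20          # "a 20th of a penny" on layer two

base_overhead = base_fee / transfer   # 0.10 -> 10% of the payment
l2_overhead = l2_fee / transfer       # 0.00001 -> 0.001% of the payment
ratio = base_fee / l2_fee             # layer one is ~10,000x costlier here
```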

(02:14:10)
And so all of these things create efficiency in the economy. And Lex, if you want to consider how much efficiency: if you gave me a billion dollars and 20 years, I couldn’t find a way to trade with another company or a counterparty in Nigeria. No amount of money. Give me $10 billion, I couldn’t do it because you get shut down at the banking level. You can’t link up a bank in Nigeria with a bank in the US. You get shut down at the credit card level because they don’t have the credit card, so they won’t clear. You get shut down at the compliance FCPA level because you wouldn’t be able to implement a system that interfaced with somebody else’s system if it’s not in the right political jurisdiction.

(02:15:04)
On the other hand, three entrepreneurs in Nigeria on the weekend could create a website that would trade in this Lightning economy using open protocols without asking anybody’s permission. So, you’re talking about something that’s like a million times cheaper, less friction, and faster to do it if you want to get money to move.

Bitcoin’s role during wartime

Lex Fridman
(02:15:27)
What do you think that looks like? So, now there’s a war going on in Ukraine. There’s other wars, Yemen, going on throughout the world. In this most difficult of states that a nation can be in, which is at war, civil war, or war with other nations, what’s the role of Bitcoin in this context?
Michael Saylor
(02:15:51)
I mean, Bitcoin is a universal trust protocol. A universal energy protocol, if you will. English is one. What I see is a bunch of fragmentation of applications-
Michael Saylor
(02:16:00)
For example, the Russian payment app is not going to work in Ukraine.
Lex Fridman
(02:16:09)
Right.
Michael Saylor
(02:16:09)
The Ukraine payment app is not going to work in Russia. The US payment apps won’t work either of those places as far as I know. So in Argentina, their payment app may not work in certain parts of Africa. So what you have is different local economies where people spin up their own applications compliant with their own local laws or in war zones, not compliant, but just spinning up.
Lex Fridman
(02:16:41)
So how do you build something that’s not compliant? What is the revolutionary act here when you don’t agree with the government or what you want to free yourself from the… So here’s the thing. When a nation is really at war, especially if it’s an authoritarian regime, it’s going to try to control the pipe, lock everything down, the spread of information. How do you break through that? Do you do the thing that you mentioned, which is you have to build another app essentially that allows the flow of money outside the legal constraints placed on you by the government? So basically break the law, is that possible?
Michael Saylor
(02:17:23)
Metaphorically speaking, if you want to break out of the constraints of your culture, you learn to speak English. For example, it’s not illegal to speak English. Even if it is, does it matter? But English works everywhere in the world if you can speak it, and then you can tap into a global commerce and intelligence network. So Bitcoin is a language. So you learn to speak Bitcoin or you learn to speak Lightning, and then you tap into that network in whatever manner you can.
Lex Fridman
(02:17:53)
But the problem is it’s still very difficult to move Bitcoin around in Russia and Ukraine now during war. And there was a sense to me that the cryptocurrency in general could be the savior for helping people. There’s millions of refugees that are moving all around. It’s very difficult to move money around in that space to help people.
Michael Saylor
(02:18:18)
I think we’re very early. We’re very embryonic here. If you look at the-
Lex Fridman
(02:18:23)
Who’s we? Sorry, we as a human civilization or we operating in the cryptocurrency space?
Michael Saylor
(02:18:28)
I think the entire crypto economy is very embryonic and the human race’s adoption of it is embryonic. We’re like 1%, 2% down that adoption curve. If you take Lightning for example, the first real commercial applications of Lightning are just in the last 12 months. So we’re like year one. We might be approaching year two of commercial Lightning adoption. And if you look at Lightning adoption, Lightning’s not built into Coinbase, is not built into Binance, is not built into FTX. Cash App just implemented the first implementation, but not all the features are built into it. There’s a few dozen, a dozen Lightning wallets circulating out there.

(02:19:15)
So I think that we’re probably going to be 36 months of software development. At the point that every Android phone and every iPhone has a Bitcoin wallet or a crypto wallet in it of sorts, that’s a big deal. If Apple embraced Lightning, that’s a big deal.
Lex Fridman
(02:19:37)
So the adoption is the thing… In a war zone, the people who struggle the most in war are people who weren’t doing that great before the war started. They don’t have the technological sophistication.

Jack Dorsey

Michael Saylor
(02:19:53)
Sure.
Lex Fridman
(02:19:53)
The hackers and all those kinds of people will find a way. It’s just regular people who are just struggling to make a day-to-day living. And so if the adoption permeates the entire culture, then you can start to move money around in the digital space. If you can psychoanalyze Jack Dorsey for a second. So he’s one of the early adopters, or he’s one of the people pushing the early adoption, this layer three, so inside Cash App. What do you make of the man, of this decision, as a business owner, as somebody playing in the space? Why did he do it and what does that mean for others at that scale that might be doing the same? So incorporating the Lightning Network, incorporating Bitcoin into their products.
Michael Saylor
(02:20:46)
I think he’s been pretty clear about this. He feels that Bitcoin is an instrument of economic empowerment for billions of people that are unbanked and have no property rights in the world. If you want to give an incorruptible bank to 8 billion people on the planet, that’s the same as asking the question, “How do you give a full education through PhD to 8 billion people on the planet?” And the answer is a digital version of the 20th century thing running on a mobile phone, and Bitcoin is a bank in cyberspace, run by incorruptible software, and it’s for everybody on earth.

(02:21:36)
So I think when Jack looks at it, he’s very sensitive to the plight of everybody in Africa. If you look at Africa, how are you going to give them banks? You’re not going to put a bank branch on every corner. That’s an obscene waste of energy. You’re not going to run copper wires across the continent. That’s an obscene waste of energy. You’re not going to give them gold. So how are you going to provide people with a decent life?

(02:22:03)
The metaphor I think is relevant here, the biological metaphor, Lex, is the type one diabetic. If you’re a type one diabetic, you can’t form fat. And if you can’t form fat, then you can’t store excess energy. Fat is the ultimate organic battery, and if you’ve got 30 pounds of it, you can go 60 days without eating. But if you can’t generate insulin, you can’t form fat cells. And if you can’t form fat cells and store energy, then you can eat yourself to death. You will eat and you will die. You’ll starve to death. So the lack of property rights is like being a type one diabetic. And so if you look at most people everywhere in the world, they don’t have property rights, they don’t have an effective bank, and their currency is broken.

(02:22:56)
What are the two things that in theory would serve as the equivalent of an organic battery or an economic battery to civilization? It would be: I have a currency which holds its value and I can store it in a bank. So a risk-free currency derivative. I pay you your money, you take your life savings, you put it in a bank, you save up for your retirement, you live happily ever after. That’s the American dream, right? That’s the idyllic situation. The real situation is there are no banks. You can’t get a bank account. So I give you your pay in currency and then I double the supply and I give it to my cousin, or I give it to whatever cause I want or I use it to buy weapons. And then you find a loaf of bread costs triple next month what it cost this month, and your life savings is worthless.

(02:23:53)
And so in that environment, everybody’s ripped back to Stone Age barter. And the problem with that, even Stone Age barter, is you’re going to carry your life savings on your back. And what happens when the guy with a machine gun points it at your head and just takes your life savings? So I think from Jack’s point of view, he thinks that, this is maybe too strong, but these are my words, life is hopeless for a lot of people and Bitcoin is hope because it gives everyone an engineered monetary asset that’s a bearer instrument and it gives them a bank on their mobile phone and they don’t have to trust their government or another counterparty with their life force.

(02:24:46)
So there’s a secondary thing I think he’s interested in, which is… The first thing is the human rights issue. And the second thing would be the friction to trade cross borders is so great. You like AI. So I’ll give you a beautiful notion. Maybe one day there’ll be an artificially intelligent creature in cyberspace that is self-sufficient and rich.
Lex Fridman
(02:25:21)
It would have sovereignty. You mean-
Michael Saylor
(02:25:23)
Can a robot own money or property? How about can a Tesla car? Can I actually put enough money in a car for it to drive itself and maintain itself forever? Or can I create an artificially intelligent creature in cyberspace that is endowed such that it would live 1,000 years and continue to do its job? We have a word for that in the real world. It’s institution, Harvard, Cambridge, Stanford. Right? There are institutions with endowments that go on in perpetuity, but what if I wanted to perpetuate a software program?
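The endowment idea being gestured at here is ordinary perpetuity math: an agent (or an institution like the ones named) can run forever if the real yield on its holdings covers its operating costs. The figures below are hypothetical, purely to illustrate the rule.

```python
# Perpetuity rule: spend only the return, never the principal.
# All figures are hypothetical illustrations, not real costs.

def min_endowment(annual_cost: float, real_yield: float) -> float:
    """Smallest endowment whose yield covers the annual cost forever."""
    return annual_cost / real_yield

# e.g. a software agent paying ~$2,000/year for compute and storage,
# earning a 4% real return on its holdings:
needed = min_endowment(annual_cost=2_000, real_yield=0.04)  # 50,000.0
```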

(02:26:04)
And with something like digital property with Bitcoin and Lightning, you could do it. And on the other hand, with banks and credit cards, you couldn’t ever. So you can create things that are beautiful and lasting and what’s the difference in speed? Well, so I can either trade with everybody in the world at the speed of light, friction-free in 24 hours writing a Python script, or I can spend $100 billion to trade with a few million people in the world after it takes them six months of application. The impedance is like a 10,000,000 to one difference, and the metaphors are literally like launching something in orbit versus almost orbit or vacuum sealing something. Does it last forever and does it orbit forever or does it go up and come down and burn up? Right? And I think Jack is interested in putting freedom in orbit, right?
Lex Fridman
(02:27:22)
Putting freedom-
Michael Saylor
(02:27:23)
Putting freedom in orbit. And he said it many times, he said, “The internet needs a native currency.” Right? And no political construct or security can be a native currency. You need a property and you need a property that can be moved 1,000,000 times a second. Can you oscillate it at 10 kilohertz or 100 kilohertz? And the answer is only if it’s a pure digital construct, permissionless and open. And so I think that he’s enthusiastic as the technologist and he’s enthusiastic as the humanitarian. And what he’s doing is to support both those areas. He’s supporting the Bitcoin and the Lightning protocol by building them into his products, but he’s also building the applications which you need at the Cash App level in order to commercialize and deliver the functionality and the compliance necessary, and they’re related.
Lex Fridman
(02:28:23)
And I should also say he’s just a fascinating person for a random reason that I couldn’t even explain if I tried. I met him a few days ago and gave him a great big hug in the middle of nowhere. There was no explanation. He just appeared. That’s a fascinating human. His relationship with art, with the world, with human suffering, with technology is fascinating. I don’t know what his path looks like, but it’s interesting that people like that exist. And in part, I’m saddened that he no longer is involved with Twitter directly as a CEO because I was hoping something inside Twitter would also integrate some of these ideas of what you’re calling digital energy to see how social networks, something I’m really interested in and passionate about, could be transformed.

(02:29:19)
Let me ask you, just for educational purposes. Can you please explain to me what Web3 is and the beef between Jack and Marc Andreessen, exactly? Did you see what happened? Sorry to have you analyze Twitter like it’s Shakespeare, but can you please explain to me why there was any drama over this topic?
Michael Saylor
(02:29:42)
First of all, Web3 is a term that’s used to refer to the part of the economy that’s token finance. So if I’m launching an application and my idea is to create a token along with the application and issue the token to the community so as to finance the application and build support for it, I think that that’s the most common interpretation of Web3. There are other interpretations too, so I’m just going to refer to that one. And I think the beef in a nutshell, not articulated, but I’ll articulate it, is whether or not you should focus all your energy creating applications on top of an ethical digital property like Bitcoin or whether you should attempt to create a competitor to it, which generally would be deemed as a security by the Bitcoin community?

(02:30:40)
So I’m going to put on my Bitcoin hat here. Right? If it’s driven by a venture capitalist, well, it’s a security. If there’s a CEO and a CTO, it’s a security. All these projects, they’re companies. Foundations are companies. Right? If you call them a project or a foundation, it doesn’t make it not a security. They’re all in essence, collections of individuals that are issuing equity in the form of a token. And if there’s a pre-mine, an IPO, an ICO, a foundation or any kind of protocol where there’s a group of engineers that have influence over it, then to a securities lawyer or to most Bitcoiners, and definitely to anybody that’s steeped in securities law, you look at it and say, “Well, that passes the Howey test.” It looks like a security. It should be sold to the public pursuant to disclosures and regulations, and you’re just ducking the IPO process. Right?

(02:31:50)
And so now we get back to the ethical issue. Well, the ethical issue is if you’re trading it as a commodity and representing it as a commodity, while truthfully it’s a security, then it’s a violation of ethics rules and it’s probably illegal.
Lex Fridman
(02:32:07)
Well, you keep leaning on this. Let me push back on that part. Maybe you can educate me, but you keep leaning on this line of securities law, with all due respect to lawyers, as if that line somehow defines what is and isn’t ethical. I think there’s a lot of correlation as you’ve discussed, but I’d like to leave the line aside. If the law calls something a security, it doesn’t mean in my eyes that it is unethical. There could be some technicalities and lawyers and people play games with this kind of stuff all the time. But I take your bigger point that if there’s a CEO, if there’s a project lead that’s fundamentally… Well, that to you is fundamentally different than the structure of Bitcoin.
Michael Saylor
(02:32:54)
It’s not that creating securities is unethical. I created security. I took a company public. Right? That’s not the unethical part. It’s completely ethical to create securities. Block is a security, all companies are securities. The unethical part is to represent it as property when it’s a security and to promote it or trade it as such.
Lex Fridman
(02:33:16)
This whole promotion, that’s also a technical thing because what counts or doesn’t count as promotion is a legal thing and you get in trouble for all these things, but that’s the game that lawyers play. There’s an ethical thing here, which is what’s right to promote or not? To me, propaganda is unethical, but it’s usually not illegal.
Michael Saylor
(02:33:45)
You roll clock back 20 years, right? All the boiler room pump and dump schemes were all about someone pitching a penny stock-
Lex Fridman
(02:33:52)
Sure.
Michael Saylor
(02:33:53)
Selling swampland in Florida. And if you roll the clock forward 20 years, and I create my own company and I represent it as the same thing, and I don’t make the disclosures, you’re just one step removed from the boiler room scheme, and that’s what’s distasteful about it. There are ways to sell securities to the public, but there are expectations. Maybe we could forget about whether the security laws are ethical or not, right? I will leave that alone. We’ll just start with the biblical definition of ethics.
Lex Fridman
(02:34:29)
Yes.
Michael Saylor
(02:34:29)
Don’t lie, cheat or steal. So if I’m going to sell something to you, I need to fully disclose what I’m selling to you, and that’s a matter of great debate right now. So I think that that’s part of the debate, but the other part of the debate is whether or not we need more than one token, we need at least one. Right?
Lex Fridman
(02:34:30)
Yes.
Michael Saylor
(02:34:58)
We need at least one digital property.
Lex Fridman
(02:34:59)
One is better than zero.
Michael Saylor
(02:35:01)
Because zero means there is no digital economy.
Lex Fridman
(02:35:05)
Yes.
Michael Saylor
(02:35:06)
And by the way, the conventional view of maximalists is they think there’s only one and everything else isn’t. That’s not the point I’m going to make. I would say we know there is at least one digital property and that is Bitcoin. If you can create a truly decentralized, non-custodial bearer instrument that is not under the control of any organization that is fairly distributed, then you might create another or multiple and there may be others out there. But I think that the frustration of a lot of people in the Bitcoin community, and I share this with Jack, is we could create $100 trillion of value in the real world simply by building applications on top of Bitcoin as a foundation. And so continually trying to reinvent the wheel and create competitive things is a massive waste of time and it’s diversion of human creativity. It’s like we have an ethical good thing, and now we’re going to try to create a third or a fourth one. Why?

Bitcoin conflict of interest

Lex Fridman
(02:36:29)
Well, let’s talk about it. So first of all, I’m with you, but let me ask you this interesting question because we talked about properties and securities. Let’s talk about conflict of interest. You said you could advertise… You have a popular Twitter account. It’s hilarious and insightful. You do promote Bitcoin in a sense. I don’t know if you would say that, but do you think there’s a conflict of interest in anyone who owns Bitcoin, promoting Bitcoin? Is it the same as you promoting farming?
Michael Saylor
(02:37:03)
I would say no. There’s an interest. I think that you can promote a property or an idea to the extent that you don’t control it. I think that the point at which you start to have a conflict of interest is when you’re promoting a proprietary product or proprietary security. A security in general is a proprietary asset. So for example, if you look at my Twitter, you’ll find that I make lots of statements about Bitcoin. You won’t ever see me making a statement that says MicroStrategy stock will go up forever. I’m not promoting the security MSTR because at the end of the day, MSTR is a security. It is proprietary. I have a proprietary interest in it. I have a disproportionate amount of control and influence on the direction. Whereas-
Lex Fridman
(02:38:01)
The control is the problem. The control is the problem because you have interest in both. If Bitcoin is as successful as we’re talking about, you very possibly can become the richest human on earth given how much you own in Bitcoin. The wealthiest, not the richest. I don’t know what those words mean.
Michael Saylor
(02:38:22)
I would benefit economically.
Lex Fridman
(02:38:24)
You would benefit economically.
Michael Saylor
(02:38:26)
That’s true.
Lex Fridman
(02:38:26)
So the reason that’s not a conflict of interest is because of the word property, that Bitcoin is an idea and Bitcoin is open-
Michael Saylor
(02:38:37)
It’s because I don’t own it. I don’t control it. In essence, the ethical line here is could I print myself 10,000,000 more Bitcoin or not? Right?
Lex Fridman
(02:38:51)
Or can anyone? Right? It’s not just you. It’s can anyone? Because can you promote somebody else’s? Yes, I guess you can. Can you promote Apple when you have no stake?
Michael Saylor
(02:39:04)
You could have a Twitter account where you promote oil or you promote camping or you promote family values or promote a carnivore diet or promote the Iron Man, right?
Lex Fridman
(02:39:16)
But you’re not going to get wealthier if you promote camping because you can’t own a stake in… You own a lot of Bitcoin. What is that? Don’t you own the stake in the idea of Bitcoin?
Michael Saylor
(02:39:31)
Yeah, I would grant you that.
Lex Fridman
(02:39:34)
But the lack of control is the fundamental ethical line that you don’t have… All you are is you’re a fan of the idea. You believe in the idea and the power of idea.
Michael Saylor
(02:39:47)
Yeah, I think-
Lex Fridman
(02:39:47)
You can’t take that idea away from others.
Michael Saylor
(02:39:51)
Let me give you some maybe easier examples. If you were the Head of the Marine Corps and someone came to you and said, “I created Marinecoin, and the twist on Marinecoin is I want you to tell every Marine that they’ll get an extra Marinecoin when they get their next stripe. And then I’m going to let you buy Marinecoin now, and then after you buy Marinecoin, I want you to promote it to them.” At some point, if you start to have a disproportionate influence on it, or if you’re in a conversation with people with disproportionate influence, it becomes a conflict of interest, and it would make you profoundly uncomfortable, I think, if the Head of the Marine Corps started promoting anything that looked like a security.

(02:40:45)
Now, if the Head of the Marine Corps started promoting canoeing, you might think he’s wacky. Maybe that’s a waste of time and a distraction. But to the extent that canoeing is not a security, not a problem, unless you… Ultimately, the issue of decentralization is really a critical one.
Lex Fridman
(02:41:08)
So not having a head. Can Bitcoin be replicated? So all the things that you’re saying that make it a property, can that be replicated? Have any other-
Michael Saylor
(02:41:23)
I think it’s possible to create other crypto properties.
Lex Fridman
(02:41:26)
Is having a head of a project the thing that limits its ability to be a property if you try to replicate the project? Is that the fundamental flaw?
Michael Saylor
(02:41:40)
No. Look, I think the real fundamental issue is you just never want it to change. If you really want something decentralized, you want a genetic template that substantially is not going to change for 1,000 years. So I think Satoshi said it at one point. He said, “The nature of the software is such that by version 0.1, its genetic code was set.” If there was any development team that’s continually changing it on a routine basis, it becomes harder and harder to maintain its decentralization because now there’s the issue of who is influencing the changes?

(02:42:23)
So what you really want is a very, very simple idea. The simplest idea: I’m just going to keep track of who owns 21,000,000 parts of energy. And when someone proposes big functional upgrades, you don’t really want that development to go onto the base layer. You want that development to go onto layer threes, because now Cash App has a proprietary set of functionality and it’s a security. And if you’re going to promote the use of this thing, you’re not going to promote the layer three security because that’s an edge to a given entity and you’re trusting the counterparty. You’re going to promote the layer one or at most the layer two.

Satoshi Nakamoto

Lex Fridman
(02:43:13)
Okay, so one of the fascinating things about Bitcoin, and sorry to romanticize certain notions, but Satoshi Nakamoto, the founder, is anonymous. Maybe you can speak to whether that’s useful, but also I just like the psychology of that, to imagine that there’s a human being that was able to create something special and walk away. But first, are you Satoshi Nakamoto?
Michael Saylor
(02:43:40)
I’m certain I’m not. No, actually, I think the provenance is really important, and if I were to look at the highlighted points, I think having a founder that was anonymous, or stayed anonymous, is important. I think the founder disappearing is also important. I think that the fact that the Satoshi coins never moved is also important. I think the lack of an initial coin offering is also important. I think the lack of a corporate sponsor is important. I think the fact that it traded for 15 months with no commercial value was also important. I think that the simplicity of the protocol is very important. I think that the outcome of the block size wars is very important, and all of those things add up to common property. They’re all indicia, indicators of a digital property as opposed to a security.

(02:44:45)
If there was a Satoshi sitting around, sitting on top of $50 billion worth of Bitcoin, I don’t think it would cripple Bitcoin as property, but I think it would undermine it as digital property. And if I wanted to undermine a crypto asset network, I would do the opposite of all those things. I would launch one myself. I would sell 25% or 50% to the general public. I would pre-mine some stuff, or early mine it, and I would keep an influence on it. Those are all the opposite of what you would do in order to create common property. And so I see the entire story as Satoshi giving a gift of digital property to the human race and disappearing.
Lex Fridman
(02:45:38)
Do you think it was one person? Do you have ideas of who it could be?
Michael Saylor
(02:45:41)
I don’t care to speculate.
Lex Fridman
(02:45:45)
But do you think it was one person?
Michael Saylor
(02:45:47)
I think it was one person, maybe in conjunction with a bunch of others. It might’ve been a group of people that were working together, but certainly there’s a Satoshi.
Lex Fridman
(02:45:56)
It’s just so fascinating to me that one person could be so brave and thoughtful. Or do you think a lot of it was accident, like the block size wars, the decision to make a block a certain size, all the things you mentioned that led up to the characteristics that make Bitcoin property? Do you think that’s an accident, or was it deeply thought through? This is almost like a history of science question.
Michael Saylor
(02:46:22)
They tried 40 of them, right? I think there’s a history of attempting to create something like this, and it was tried many, many times, and they failed for different reasons. And I think that it’s like Prometheus tried to start a fire 47 times, and maybe the 48th time it sparked, and that’s how I see this. This is the first one that sparked, and it sets a roadmap for us. And I think if you’re looking for any one word that characterizes it, it’s fair. The whole point of the network is it’s a fair launch, a fair distribution. I have Bitcoin, but I bought it. In fact, at this point, we’ve paid $4 billion of real cash to buy it. If I was sitting on the same position and I had it for free, or I bought it for a nickel a coin or a penny a coin, the question is, was it fair? And that’s a very hard question to answer, right? Did you acquire the Bitcoin that you own fairly? And if you roll the clock back, you could have bought it for a nickel or a dime, but that was when it was 1,000,000 times more likely to fail, right? When the risk was greater, the cost was lower, and then over time, the risk became lower and the cost became greater.

(02:47:50)
And the real critical thing was to allow the marketplace absent any powerful, interested actor, right? If Satoshi had held 1,000,000 coins and then stayed engaged for 10 more years, tweaking things in the background, there’d still be that question. But what we’ve got is really a beautiful thing. We’ve got a chain reaction in cyberspace or an ideology spreading virally in the world that has seasoned in a fair, ethical fashion. Sometimes it’s a very violent, brutal fashion with all the volatility, and there’s been a lot of sound and fury along the way.

Volatility

Lex Fridman
(02:48:36)
How do you psychoanalyze? How do you deal from a financial, from a human perspective with the volatility? You mentioned you could have gotten it for a nickel and the risk was great. Where’s the risk today? What’s your sense?
Michael Saylor
(02:48:50)
We’re 13 years into this entire activity. I think the risk has never been lower. If you look at all the risks, the risks in the early years were: is the engineering protocol proper? One megabyte block size, 10-minute clock frequency, the cryptography itself. Will it be hacked, or will it crash? 730,000 blocks and it hasn’t crashed. Will it be hacked? It hasn’t been hacked. It’s a Lindy thing, right? You wait 13 years to see if it’ll be hacked. But on the other hand, with $1 billion, it’s not as interesting a target as it is with $100 billion. And when it gets to be worth $1 trillion, then it’s a bigger target.

(02:49:34)
So the risk has been bleeding off over time as the network monetized. I think the second question is, will it be banned? You couldn’t know. It literally could have been banned many times early on. In fact, in 2013, I tweeted on the subject. I thought it would be banned. I made a very infamous tweet.
Lex Fridman
(02:49:57)
Infamous tweet, yeah.
Michael Saylor
(02:49:58)
I thought it was going to be banned.
Michael Saylor
(02:50:00)
In 2014, the IRS designated it as property and gave it property tax treatment. They could have given it a tax treatment where you had to pay tax on the unrealized capital gains every year, and it probably would’ve crushed it to death. Right? So it could have been banned in any number of places by a government, but in fact, it was legitimized as property. And then the question is, would it be copied? Will there be something better than it? And it was copied 15,000 times. And you know the story of all those: they either diverged to be something totally different and not comparable, or someone trying to copy a non-sovereign bearer instrument store of value found that their networks crashed to 1% of what Bitcoin is. So now we’re sitting at a point where all those risks are out of the way.

(02:50:59)
I would say that year one of institutional adoption started in August 2020. That’s when MicroStrategy bought $250 million worth of Bitcoin and we put that out on the wire. We were the first publicly traded company to actually buy Bitcoin. I don’t think you could have found a $5 million purchase from a public company before we did that. So that was like a gun going off. And then in the next 12 months, Tesla bought Bitcoin, Square bought Bitcoin. I’d say now we’re in year two of institutional adoption. There should be 24 publicly traded Bitcoin miners by the end of this quarter. So you’re looking at 36 publicly traded companies, and you’ve got at least in the range of $50 billion of Bitcoin on the balance sheets of publicly traded companies and hundreds of billions of dollars of market cap of Bitcoin-exposed companies. So I would say, for the asset, decade one was entrepreneurial, experimental. Decade two is a rotation from entrepreneurs to institutions, and it’s becoming institutionalized. So maybe in decade one, you go from zero to a trillion, and in decade two you go from $1 trillion to $100 trillion.
Lex Fridman
(02:52:22)
What about government adoption? You said institutional adoption; are governments important in this, maybe some governments incorporating it as a currency into their banks, all that kind of stuff? Is that important? And if it is, when will it happen?
Michael Saylor
(02:52:42)
It’s not essential for the success of the asset class, but I think it’s inevitable in various degrees over time. But the most likely thing to happen next is large acquisitions by institutional investors of Bitcoin as a digital gold, where they’re just swapping out gold for digital gold and thinking of it like that. The government entities most likely to be involved with that would be sovereign wealth funds. If you look at all the sovereign wealth funds that are holding big tech stock equities, the Swiss, the Norwegians, the Middle Easterners, if you can hold big tech then holding digital gold would be not far removed from that. That’s a non-controversial adoption.

(02:53:33)
I think there are opportunities for governments that are much more profound. If a government started to adopt Bitcoin as a Treasury Reserve asset, that’s much bigger than just an asset investment that’s 100X bigger. And you could imagine, that’s like a trillion dollar opportunity. Like any government that wanted to adopt it as a Treasury Reserve asset would probably generate a trillion or more of value. And then the thing that people think about is, “Well, will oil ever be priced in Bitcoin or any other export commodity?” I think there’s $1.8 trillion or more of export commodities in the world, and right now they’re all priced in dollars. I think that this is a colorful thing, but not really that relevant. You could sell all that stuff in dollars. The relevant decision that any institution makes, whether they’re a nonprofit, a university, a corporation, or a government, is what’s your Treasury Reserve asset? And if your Treasury Reserve asset is the peso, and if the peso is losing 20% or 30% of its value a year, then your balance sheet is collapsing within five years.
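The compounding behind “your balance sheet is collapsing within five years” is easy to check; a minimal sketch with illustrative numbers:

```python
# Quick check of the claim: a treasury asset losing 20-30% of its value per
# year leaves only a small fraction of the balance sheet after five years.
def remaining_value(annual_loss: float, years: int = 5) -> float:
    """Fraction of value left after compounding an annual loss."""
    return (1 - annual_loss) ** years

print(round(remaining_value(0.20), 3))  # 0.328 -> about two-thirds gone
print(round(remaining_value(0.30), 3))  # 0.168 -> over 80% gone
```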

(02:54:57)
And if the Treasury Reserve asset is dollars in currency derivatives and US Treasuries, then you’re getting your seven… Right now it’s probably 15% or more monetary inflation. We’re running double the historic average. You could argue triple. Somewhere between double and triple, depending upon what your metric is. So, do I think it’ll happen? I think that they’re conservative, but they have to be shocked, and I think there is a shock. The latest Russian sanctions are a big shock: when the West seizes $300 billion worth of Russian gold and currency derivatives, I think you get the famous quote by Putin that we have to rethink our Treasury strategies. And that pushes everybody toward a commodity strategy, “What commodities do I want to hold?” I think that’s got a lot of people thinking. I think it’s got the Chinese thinking. Everybody wants to be the reserve currency, so if I buy $50 billion worth of dollars every year, then I buy $500 billion over a decade, and I probably pay $250 billion of inflation costs on the backs of my citizens in a decade.
Lex Fridman
(02:56:20)
So inflation could be one of the sources of shock. You wonder if there is a switch to Bitcoin whether it would be a bang or a whimper. What is the nature of the shock of the transition?
Michael Saylor
(02:56:32)
I think that the year 2022 is pretty catalytic for digital assets in general and for Bitcoin in particular. The Canadian trucker crisis I think educated hundreds of millions of people and made them start questioning their property rights and their banks. I think the Ukraine war was a second shock, but I think that the Russian sanctions was a third shock. And I think hyperinflation in the rest of the world is a fourth shock. And then persistent inflation in the US is a fifth shock.

(02:57:14)
So I think it’s a perfect storm. And if you put all these events together, what do they signify? They signify the rational conclusion for any person thinking about this is, “I’m not sure if I can trust my property. I don’t know if I have property rights. I don’t know if I can trust the bank. And if I’m politically at odds with the leader of my own country, I’m going to lose my property. And if I’m politically at odds with the owner of another country, I’m still going to lose my property. And when push comes to shove, the banks will freeze my assets and seize them.”

(02:57:56)
And I think that that is playing out in front of everybody in the world, such that your logical response would be, “I’m going to convert my weak currency to a strong currency. Like, I’ll convert my peso and lira to the dollar. I’m going to convert my weak property to strong property. I’m going to sell my building in downtown Moscow, and I’d rather own a building in New York City. I’d rather own a building in a powerful nation than be stuck with a building in Nigeria or a building in Argentina or whatever. So I’m going to sell my weak properties to buy strong properties. I’m going to convert my physical assets to digital assets. I’d rather own a digital building than own a physical building, because if I had a billion-dollar building in Moscow, who can I rent that to? But if I have a billion-dollar digital building, I can rent it to anybody in any city in the world, anybody with money, and the maintenance cost is almost nothing, and I can hold it for 100 years.” So it’s an indestructible building.

(02:59:09)
And then finally, I want to move from having my assets in a bank with a counterparty to self-custody assets. It is not just Ukraine, but this is like the story in Turkey, Lebanon, Syria, Afghanistan, Iraq, South America. You don’t really want to be sitting with $10 million in a bank in Istanbul. The bank’s going to freeze your money, convert it to lira, devalue the lira, and then feed it back to you over 17 years, right?
Lex Fridman
(02:59:42)
So self-custody assets would be layer one Bitcoin?
Michael Saylor
(02:59:46)
Self-custody assets is like if I’ve got my own hardware wallet and I’ve either got… Your highest form of self-custody would be Bitcoin on your own hardware wallet, in your own self-custody. And the other thing people think about is, “How do I get crypto dollars, Tether, some stablecoin?” If you had a choice, would you rather have your money in a bank in a war zone in dollars, or have your money in a stablecoin on your mobile phone in dollars? I mean, you would take the latter risk rather than the former risk.
Lex Fridman
(03:00:26)
In a war zone, definitely, yeah.
Michael Saylor
(03:00:29)
And you can see that happening. We’ve gone from 5 billion in stable coins to 200 billion in the last 24 months. So I do think there’s massive demand for crypto dollars in the form of a US dollar asset. Everybody in the world would say, “Yeah, I want that.” Well, unless you’re just an extreme patriot. But most people in the world would say, “I want that.” And then a lesser group of people would say, “I think I want to be able to carry my property in the palm of my hand so I have self-custody of it.”

Bitcoin price

Lex Fridman
(03:01:04)
So Bitcoin price has gone through quite a roller coaster. What do you think is the high point it’s going to hit?
Michael Saylor
(03:01:12)
I think it’ll go forever. I mean, I think Bitcoin is going to climb in a serpentine fashion. It’s going to advance and come back, and it’s going to keep climbing. I think that the volatility attracts all the capital into the marketplace. And so the volatility makes it the most interesting thing in the financial universe. It also generates massive yield and massive returns for traders, and that attracts capital. We’re talking about the difference between a 5% return and a 500% return.

(03:01:49)
So the fast money is attracted by the volatility. The volatility has been decreasing year by year by year. I think that it’s stabilizing. I don’t think we’ll see as much volatility in the future as we have in the past. I think that if we look at Bitcoin and model it as digital gold, the market cap goes to between 10 and 20 trillion. But remember, gold is defective property. Gold is dead money. You have a billion dollars of gold that sits in a vault for a decade, it’s very hard to mortgage the gold. It’s also very hard to rent the gold. You can’t loan the gold. No one’s going to create a business with your gold.

(03:02:36)
So gold doesn’t generate much of a yield. So for that reason, most people wouldn’t store a billion dollars for a decade in gold. They would buy a billion dollars of commercial real estate property. The reason why is because I can rent it and generate a yield on it that’s in excess of the maintenance cost. So if you consider digital property, that’s a $100 to $200 trillion addressable market. So I would think it goes from $10 trillion to $100 trillion as people start to think of it as digital property.
Lex Fridman
(03:03:08)
What does that mean in terms of price per coin?
Michael Saylor
(03:03:11)
At $500,000 a coin, that’s a $10 trillion asset. At $5 million a coin, that’s a $100 trillion asset.
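Those price points follow directly from Bitcoin’s fixed 21 million coin supply; the arithmetic can be sketched as:

```python
# Market-cap arithmetic implied by Bitcoin's fixed 21 million coin cap.
TOTAL_COINS = 21_000_000

def market_cap(price_per_coin: int) -> int:
    """Total network value at a given price per coin, in dollars."""
    return TOTAL_COINS * price_per_coin

print(market_cap(500_000))    # 10_500_000_000_000 -> roughly $10 trillion
print(market_cap(5_000_000))  # 105_000_000_000_000 -> roughly $100 trillion
```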
Lex Fridman
(03:03:20)
So you think it crosses a million, and it can go even higher?
Michael Saylor
(03:03:24)
Yeah, I think it keeps going up forever. I mean, there’s no reason we couldn’t go to 10 million a coin. Because digital property isn’t the highest form, right? Gold was that low-frequency money. Property is a mid-frequency money, but when I start to program it faster, it starts to look like digital energy. Then it doesn’t just replace property; then you’re starting to replace bonds. There’s 100 trillion in bonds, there’s 50 to 100 trillion in other currency derivatives. And these are all conventional use cases. I think that there’s 350 trillion to $500 trillion worth of currency and currency derivatives in the world. When I say that, I mean things that are valued based upon fiat cash flows. Any commercial real estate, any bond, any sovereign debt, any currency itself, any derivatives to those things, they’re all derivatives, and they’re all defective. And they’re all defective because of this persistent seven to 14% loss, which we call inflation or monetary expansion.

(03:04:41)
Can we switch subjects to talk about the energy side of it-
Lex Fridman
(03:04:45)
Sure.
Michael Saylor
(03:04:45)
… like the innovative piece?
Lex Fridman
(03:04:48)
Yeah.
Michael Saylor
(03:04:48)
Let’s just start with this idea that I’ve got a hotel worth a billion dollars with 1,000 rooms. When it becomes a dematerialized hotel-
Lex Fridman
(03:04:58)
I love that word so much, by the way, dematerialized hotel.
Michael Saylor
(03:05:01)
We’re across from the Fontainebleau here. Imagine the Fontainebleau is dematerialized. The problem with the physical hotel is I’ve got to hire real people, moving subject to the speed of sound and physics laws and Newton’s laws, and I can rent it to people in Miami Beach. But if it was a digital hotel, I could rent the room to people in Paris, London, and New York every night, and I can run it with robots. And as soon as I do that, I can rent it by the room hour, and I can rent it by the room minute. And so I start to chop my hotel up into 100,000 room hours that I sell to the highest bidder anywhere in the world. And you can see all of a sudden the yield, the rent, and the income of the property is dramatically increased.

(03:05:50)
I can also see the maintenance cost of the property falls. I get on Moore’s Law and I’m operating in cyberspace. So I got rid of Newton’s laws, I got rid of all the friction and all those problems, I tapped into the benefits of cyberspace. I created a global property. I started monetizing at different frequencies. And of course, now I can mortgage it to anybody in the world. You’re not going to be able to get a mortgage on a Turkish building from someone in South Africa. You have to find someone that’s local to the culture you’re in. So when you start to move from analog property to digital property, it’s not just a little bit better, it’s a lot better. And what I just described, Lex, is like the DeFi vision. It’s the beauty of DeFi flash loans, money moving at high velocity. At some point, if the hotel is dematerialized, then what’s the difference between renting a hotel room and loaning a block of stock? I’m just finding the highest best use of the thing.
Lex Fridman
(03:07:05)
It feels like the magic really emerges, though, when you build a market of layer two and layer three technologies on top of that. So maybe you can correct me if I’m wrong, but for all these hotels and all these kinds of ideas, it’s always touching humans at some point, and the consumers are humans, business owners and so on. So you have to create interfaces. You have to create services that make all of that super efficient, super fun to use, pleasant, effective, all those kinds of things. And so you have to build a whole economy on top of that.
Michael Saylor
(03:07:44)
Yeah. I happen to think that won’t be done by the crypto industry at all. I think that’ll be done by centralized applications. I think it’ll be the citadels of the world, the high speed traders of the world, the New Yorkers. I think it’ll be Binance, FTX, and Coinbase as a layer three exchange that will give you the yield and will give you the loan and the best terms. Because ultimately, you have to jump these compliance hoops. BlockFi can give you yield, but they have to do it in a compliant way with the United States jurisdiction. So ultimately, those applications to use that digital property and either give you a loan on it or give you yield on it, are going to come from companies.

(03:08:33)
But the difference, the fundamental difference is it could be companies anywhere in the world. So if a company in Singapore comes up with a better offering, then the capital is going to start to flow to Singapore. I can’t send 10 city blocks of LA to Singapore to rent during a festival, but I can send 10 blocks of Bitcoin to Singapore. So you’ve got a truly global market that’s functioning in this asset, and it’s their second order asset. For example, maybe you’re an American citizen and you own 10 Bitcoin and someone in Singapore will generate 27% yield in the Bitcoin but legally you can’t send the money to them or the Bitcoin to them, it doesn’t matter. Because the fact that that exists means that someone in Hong Kong will borrow the 10 Bitcoin from somebody in New York, and then they will put on the trade in Singapore, and that will create a demand for Bitcoin, which will drive up the price of Bitcoin, which will result in an effective tax-free yield for the person in the US that’s not even in the jurisdiction.

(03:09:43)
So there’s nothing that’s going on in Singapore to drive up the price of your land in LA. But there is something going on everywhere in the world to drive up the price of property in cyberspace if there’s only one digital Manhattan. And so there’s a dynamic there which is profound because it’s global. But now let’s go to the next extreme. I’m still giving you a fairly conventional idea, which is, let’s just loan the money fast on a global network and let’s just rent the hotel room fast in cyberspace. But let’s move to maybe a more innovative idea. The first generation of the internet brought a lot of productivity, but there are also just a lot of flaws in it. For example, Twitter is full of garbage. Instagram DMs are full of garbage. Your Twitter DMs are full of garbage. YouTube is full of scams. Every 15 minutes there’s a Michael Saylor Bitcoin giveaway spun up on YouTube. My Office 365 inbox is full of garbage, millions of spam messages. I’m running four different email filters. My company spends millions of dollars a year to fight denial of service attacks and all sorts of other security things.

(03:11:03)
There are denial of service attacks everywhere against everybody in cyberspace all the time. It’s extreme. And we’re all beset with hostility, right? You’ve been a victim of it on Twitter. You go on Twitter and people post stuff they would never say to your face. And then if you look, you find out that the account was created three days ago and it’s not even a real person. So we’re beset with phishing attacks and scams and spam bots and garbage. Why? The answer is because the first generation of the internet was digital information, and there’s no energy. There’s no conservation of energy in cyberspace. The thing that makes the universe work is conservation of energy.

(03:11:49)
If I went to a hotel room, I’d have to post a credit card. And then if I smash the place up, there’d be economic consequences, maybe there’d be criminal consequences, there might be reputational consequences. A lamp might fall on me. But in the worst case, I can only smash up one hotel room. Now imagine I could actually write a Python script to send myself to every hotel room in the world every minute, not post a credit card, and smash them all up anonymously.

(03:12:26)
The thing that makes the universe work is friction, speed of sound, speed of light, and the fact that ultimately it’s conservative. You’re either energy or you’re matter, but once you’ve used the energy, it’s gone, and you can’t do infinite everything. That’s missing in cyberspace right now. And if you look at all of the moral hazards and all of the product defects that we have in all of these products, most of them, 99% of them, could be cured if we introduced conservation of energy into cyberspace. That’s what you can do with high-speed, digital property, high-speed Bitcoin. And by high-speed, I mean not 20 transactions a day, I mean 20,000 transactions a day. So how do you do that? Well, I let everybody on Twitter post 1,000 or 10,000 satoshis via a Lightning badge, “Give me an orange check.” If you put up 20 bucks once in your life, you could give 300 million people an orange check. Right now you don’t have a blue check, Lex. You’re a famous person, I don’t know why you don’t have a blue check. Have you ever applied for a blue check?

Twitter verification

Lex Fridman
(03:13:46)
No.
Michael Saylor
(03:13:46)
There are 360,000 people on Twitter with a blue check. There are 300 million people on Twitter. So the conventional way to verify accounts is elitist, archaic.
Lex Fridman
(03:14:02)
How does it work? How do you get a blue check? I mean, I’ve worked-
Michael Saylor
(03:14:05)
You’ve got to apply and wait six months, and you have to post three articles in the public mainstream media that illustrate you’re a person of interest.
Lex Fridman
(03:14:15)
Interesting.
Michael Saylor
(03:14:15)
Generally, they would grant them to CEOs of public companies. The whole idea is to verify that you are-
Lex Fridman
(03:14:24)
Who you are.
Michael Saylor
(03:14:24)
… who you say you are. But the question is, why isn’t everybody verified? There are a couple of threads on that. One is some people don’t want to be doxed; they want to be anonymous. But there are even anonymous people that should be verified, because otherwise you’re subjecting their entire following to phishing attacks and scams and hostility. But the other-
Lex Fridman
(03:14:51)
What’s the orange verification, so this idea? Can you actually elaborate a little bit more? If you put up 20 bucks…
Michael Saylor
(03:14:57)
Yeah, I think everybody on Twitter ought to be able to get an orange check if they could come up with like $10.
Lex Fridman
(03:15:03)
What is the power of that orange check? What does that verify exactly?
Michael Saylor
(03:15:08)
You basically post a security deposit for your safe passage through cyberspace. The way it would work is, if you’ve got $10 once in your life, you can basically show that you’re creditworthy. And that’s your pledge to me that you’re going to act responsibly. So you put the 10 or the $20 into the Lightning wallet, you get an orange check. Then Twitter just gives you a setting where I can say, “The only people that could DM me are orange checks. The only people that can post on my tweets are orange checks.” So instead of locking out the public and just letting your followers comment, you lock out all the unverified. And that means people that don’t want to post $10 security deposit can’t comment.

(03:15:54)
Once you’ve done those two things, then you’re in a position to monetize malice. Monetize motion, or malice for that matter, but let’s just say for the sake of argument you post something and 9,700 bots spin up and pitch their whatever scam. Right now you sit and you go report, report, report, report, report, report. And if you spend an hour, you get through half of them, you waste an hour of your life, and they just spin up another 97 gazillion because they’ve got a Python script spinning it up. So it’s hopeless. But on the other hand, if you report them, and they really are a bot, Twitter’s got a method to actually delete the account. They know that they’re bots. The problem is not that they don’t know how to delete the account; the problem is there are no consequences when they delete the account. So if there are consequences, Twitter could just seize the $10 or seize the $20 because it’s a bot, it’s a malicious criminal act or whatever, a violation of the platform rules. You end up seizing $10,000, give half the money to the reporter, and half the money to the Twitter platform.
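The mechanism being described, a refundable deposit that a confirmed bot forfeits, split between the reporter and the platform, can be sketched roughly like this. Every name, class, and amount here is hypothetical, invented for illustration; this is not a real Twitter or Lightning API:

```python
# Hypothetical sketch of the "orange check" scheme: each account escrows a
# small refundable satoshi deposit; a confirmed bot forfeits it, split
# between the reporter and the platform. Names and amounts are illustrative.
DEPOSIT_SATS = 30_000  # stand-in for "$10 or $20" worth of satoshis

class Account:
    def __init__(self, handle: str):
        self.handle = handle
        self.deposit = 0

    def post_deposit(self, sats: int = DEPOSIT_SATS) -> None:
        self.deposit = sats  # escrowed once, refundable if you leave cleanly

    @property
    def orange_check(self) -> bool:
        return self.deposit >= DEPOSIT_SATS

def slash_bot(bot: Account, reporter: Account, platform_sats: int) -> int:
    """Seize a confirmed bot's deposit: half to the reporter, half to the platform."""
    seized, bot.deposit = bot.deposit, 0
    reporter.deposit += seized // 2
    return platform_sats + (seized - seized // 2)
```

With a platform setting like “only orange checks can DM or reply,” an unverified script is priced out: spinning up a million bots now means posting a million forfeitable deposits.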
Lex Fridman
(03:17:08)
That’s a really powerful idea, but that’s adding friction akin to the kind of friction you have in the physical world. You have consequences. You have real consequences.
Michael Saylor
(03:17:19)
It’s putting conservation of energy.
Lex Fridman
(03:17:21)
Conservation of energy, but-
Michael Saylor
(03:17:22)
There’s no friction? There’s nothing on this earth without friction. I mean, you can’t walk across the room without friction. Friction is not bad. Unnecessary friction is bad. So in this particular case, you’re introducing conservation of energy, and in essence, you’re introducing the concept of consequence, or truth, into cyberspace. And that means if you do want to spin up 10 million fake Lex Fridmans…
Lex Fridman
(03:17:54)
It’s going to cost-
Michael Saylor
(03:17:55)
It’s going to cost you $100 million to spin up 10 million fake Lexes.
Lex Fridman
(03:18:00)
But the thing is, you could do that with the dollar, but in your case, you’re saying that it’s more tied to physical reality when you do it with Bitcoin?
Michael Saylor
(03:18:10)
Yeah. Well, let’s follow up on that idea a bit more. If you did do it with the dollar, then the question is, how do six billion people deposit the dollars? Could you do it with a credit card? How do you send dollars? Well, you have to dox yourself. It’s not easy.
Lex Fridman
(03:18:31)
Sure.
Michael Saylor
(03:18:31)
So you’re talking about inputting a credit card transaction, doxing yourself, and now you’ve just eliminated the two billion people that don’t have credit cards or don’t have banks. You’ve also got a problem with everybody that wants to remain anonymous. But you’ve also got this other problem, which is credit cards are expensive transactions, low frequency, slow settlement. So do you really want to pay 2.5% every time you actually show a $20 deposit? Maybe you could do a kludgy version of this for a subset of people.

(03:19:09)
It’s maybe 10% as good if you did it with conventional payment rails. But what you can’t do is the next idea, which is: I want the orange badge to be used to give me safe passage through cyberspace, tripping across every platform. So how do I solve denial-of-service attacks against a website? I publish a website, you hit it with a million requests. Okay, now how do I deal with that? Well, I can lock you out and I can make it a zero-trust website, and then you have to be coming at me through a trusted firewall or with a trusted credential. But that’s a pretty draconian thing. Or I could put it behind a Lightning wall. A Lightning wall would be: I just challenge you, “Lex, you want to browse my website, you have to show me your 100,000 satoshis. Do you have 100,000 satoshis?”

(03:20:10)
Click. Okay. Now you click away 100 times or 1,000 times. And after 1,000 times, I’m like, “Well, now Lex, you’re getting offensive. I’m going to take a satoshi from you, or 10 satoshis, a microtransaction. You want to hit me a million times? I’m taking all your satoshis and locking you out.” What you want to do is go through 200 websites a day, and every time you cross a domain, you need to be able to prove in a split second that you’ve got some asset. And when you cross back, when you exit the domain, you want to fetch your asset back.

(03:20:48)
So, how do I in a friction-free fashion browse through dozens or hundreds of websites, post a security deposit for safe passage, and then get it back? You couldn’t afford to pay a credit card fee each time. When you think about 2.5% as a transaction fee, it means you trade the money 40 times and it’s gone. It’s gone.
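The “Lightning wall” described here is essentially a rate limiter backed by a security deposit. Below is a minimal, hypothetical in-memory sketch of that logic; a real implementation would sit on actual Lightning payments rather than a Python dictionary. Browsing is free while the visitor can prove a qualifying balance; past a free-request threshold, each request costs a few satoshis; a drained deposit means lockout.

```python
# Hypothetical sketch of a "Lightning wall": a visitor proves a satoshi
# balance to be admitted; past a free-request threshold each request
# costs a few satoshis; when the balance is drained, the visitor is
# locked out. In-memory toy model, not a real Lightning integration.

class LightningWall:
    def __init__(self, required_balance=100_000, free_requests=1_000,
                 fee_per_request=10):
        self.required_balance = required_balance
        self.free_requests = free_requests
        self.fee_per_request = fee_per_request
        self.counts = {}    # visitor -> requests served so far
        self.deposits = {}  # visitor -> remaining satoshis

    def admit(self, visitor, balance):
        """Visitor proves a balance; admitted only if it meets the bar."""
        if balance < self.required_balance:
            return False
        self.deposits[visitor] = balance
        self.counts[visitor] = 0
        return True

    def request(self, visitor):
        """Serve one request, charging satoshis past the free tier."""
        if visitor not in self.deposits:
            return False  # never admitted, or already locked out
        self.counts[visitor] += 1
        if self.counts[visitor] > self.free_requests:
            self.deposits[visitor] -= self.fee_per_request
            if self.deposits[visitor] <= 0:
                del self.deposits[visitor]  # drained: locked out
                return False
        return True

wall = LightningWall()
wall.admit("lex", 100_000)
served = sum(wall.request("lex") for _ in range(1_500))
print(served)                # 1500 (1,000 free + 500 paid)
print(wall.deposits["lex"])  # 95000 (100,000 - 500 * 10 satoshis)
```

An honest visitor pays nothing or a trivial amount, while a flood of a million requests would drain the attacker’s entire deposit long before completing, which is the asymmetry the wall is meant to create.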
Lex Fridman
(03:21:14)
Yeah. So you can’t do this hopping around through the internet with this kind of verification that grounds you to a physical reality. It is a really, really interesting idea. Why hasn’t that been done?
Michael Saylor
(03:21:27)
I think you need two things. You need an idea like a digital asset like Bitcoin, that’s a bearer instrument for final settlement. And then you need a high-speed transaction network like Lightning, where the transaction cost might be a 20th of a penny or less. And if you roll the clock back 24 months, I don’t think you had the Lightning Network in a stable point. It’s really just the past 12 months. It’s an idea you could think about this year, and I think you need to be aware of Bitcoin as something other than a scary speculative asset. So I really think we’re just at the beginning.

Second best crypto

Lex Fridman
(03:22:12)
The embryonic stage. I have to ask, Michael Saylor: you said before there’s no second best to Bitcoin. What would be the second best? Traditionally there’s Ethereum with smart contracts, Cardano with proof of stake, Polkadot with interoperability between blockchains, Dogecoin with the incredible power of the meme, Monero with privacy. I can just keep going. There are, of course, after the block size wars, the different offshoots of Bitcoin.
Michael Saylor
(03:22:48)
I think if you decompose or segment the crypto market, you’ve got crypto property; Bitcoin is the king of that, and other Bitcoin forks that wanted to be a bearer instrument or store of value would be property, a Bitcoin Cash or Litecoin, something like that. Then you’ve got cryptocurrencies. I don’t think Bitcoin is a currency, because a currency I define in the nation-state sense. A currency is a digital asset that you can transfer in a transaction without incurring a taxable obligation. So that means it has to be a stable dollar or a stable euro or a stable yen, a stable coin.

(03:23:30)
So I think you’ve got cryptocurrencies; Tether and Circle are the most famous. I think you’ve got crypto platforms, and Ethereum is the most famous of the crypto platforms, the platform with smart contract functionality, et cetera. And then I think you’ve got just crypto securities. It’s just, like, my favorite whatever meme coin, and I love it because I love it, and it’s attached to my game or my company or my persona or my whatever. I think if you pushed me and said, “Well, what’s the second best?” I would say the world wants two things…
Michael Saylor
(03:24:00)
And said, “Well, what’s the second best?” I would say the world wants two things. It wants crypto property as a savings account, and it wants cryptocurrency as a checking account. And that means that the most popular thing really is going to be a stablecoin dollar. And there’s maybe a fight right now; it might be Tether, right? But a stable dollar, because I feel like the market opportunity… It’s not clear that there’ll be one that will win. The class of stable dollars is probably a $1 to $10 trillion market easily. I think that in the crypto platform space, Ethereum will compete with Solana and Binance Smart Chain and the like-
Lex Fridman
(03:24:44)
Are there certain characteristics of any of them that stand out to you? Don’t you think the competition is based on a set of features? Also… So the set of features that a cryptocurrency provides, but also the community that it provides, don’t you think the community matters and the adoption, the dynamic of the adoption both across the developers and the investors?
Michael Saylor
(03:25:07)
If I’m looking at them, the first question is, what’s the regulatory risk? How likely is it to be deemed a property versus a security? And the second is, what’s the competitive risk? And the third is, what’s the speed and the performance? All those things lead to the question of, what’s the security risk? How likely is it to crash and burn? And how stable or unstable is it? And then there’s the marketing risk. There are different teams behind each of these things and communities behind them. I think that the big cloud looming over the crypto industry is regulatory treatment of cryptocurrencies and regulatory treatment of crypto securities and crypto platforms. And I think that won’t be determined until the end of the first Biden administration. For example, there are people that would only allow US FDIC-insured banks to issue cryptocurrencies. They want JP Morgan to issue a crypto dollar backed one-to-one.

(03:26:11)
But then in the US right now, we have Circle and we have other companies that are licensed entities that are backed by cash and cash equivalents, but they’re not FDIC insured banks. There’s also a debate in Congress about whether state-chartered banks should be able to issue these things. And then we have Tether and others that are outside of the US jurisdiction. They’re probably not backed by cash and cash equivalents. They’re backed by stuff, and we don’t know what stuff. And then finally you have UST and DAI, which are algorithmic stablecoins, that are even more innovative, further outside the compliance framework. So if you ask who’s going to win, the question is really, I don’t know. Will the market decide or will the regulators decide? If the regulators get out of the way and the market [inaudible 03:27:03] Well then it’s an interesting discussion.

(03:27:05)
And then I think that all bets are off if the regulators get more heavy-handed with this. And I think you could have the same discussion with crypto properties, like the DeFi exchanges and the crypto exchanges, the SEC would like to regulate the crypto exchanges, they’d like to regulate the DeFi exchanges. That means they may regulate the crypto platforms, and at what rate and in what fashion? And so I think that I could give you an opinion if it was limited to competition under the current regulatory regime. But I think that the regulations are so fast moving and it’s so uncertain that you can’t make a decision without considering the potential actions of the regulators.

Dogecoin

Lex Fridman
(03:27:55)
I hope the regulators get out of the way. Can you steel man the case that Dogecoin is, I guess the second best cryptocurrency, or if you don’t consider Bitcoin a cryptocurrency, but instead a crypto property-
Michael Saylor
(03:28:08)
I would classify it as crypto property, because the US dollar is a currency. So unless your crypto asset is pegged algorithmically or stably to the value of the dollar, it’s not a currency, it’s a property or it’s an asset.
Lex Fridman
(03:28:22)
So then can you steel man the case that Dogecoin is the best cryptocurrency then? Because Bitcoin is not even in that list.
Michael Saylor
(03:28:32)
The debate is going to be whether it’s property or security, and there’s a debate whether it’s decentralized enough. So let’s assume it was decentralized. Well, it’s increasing at, what, not quite a 5% a year inflation rate, but it’s not 5% exponentially. It’s like plus 5 billion a year, so it’s capped, and the rate is less… I forget the exact number. It’s an inflationary property. It’s got a lower inflation rate than the US dollar, and it’s got a much lower inflation rate than many other fiat currencies. So I think you could say that.
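The issuance arithmetic behind that point can be checked directly. Dogecoin’s block reward is a fixed 10,000 DOGE on roughly one-minute blocks, so a fixed absolute number of coins is added each year, and the percentage inflation rate therefore falls as the supply grows. A quick sketch (the supply figure is approximate, circa the time of this conversation):

```python
# Dogecoin adds a roughly fixed number of coins per year: 10,000 DOGE
# per block on ~1-minute blocks, about 5.26 billion DOGE annually.
# Because the addition is absolute rather than proportional, the
# percentage inflation rate declines as the supply grows.

ANNUAL_ISSUANCE = 10_000 * 60 * 24 * 365  # 5,256,000,000 DOGE per year

supply = 132e9  # approximate circulating supply in DOGE (circa 2021)
for year in range(1, 6):
    rate = ANNUAL_ISSUANCE / supply
    print(f"year {year}: supply {supply / 1e9:.1f}B, inflation {rate:.2%}")
    supply += ANNUAL_ISSUANCE
```

The first-year rate comes out just under 4%, consistent with the “not quite 5%” figure above, and it keeps declining toward zero, unlike a fixed-percentage (exponential) issuance schedule.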
Lex Fridman
(03:29:11)
But don’t you see the power of meme, the power of ideas, the power of fun or whatever mechanism is used to captivate a community?
Michael Saylor
(03:29:25)
I do. But there are meme stocks. It doesn’t absolve you of your ethical and securities liabilities if you’re promoting it. I don’t have a problem with people buying a stock. It’s just… The way I divide the world is there’s investment, there’s saving, there’s speculation, and there’s trading. So Bitcoin is an asset for saving. If you want to save money for a hundred years, you don’t really want to take on execution risk or the like. So you’re just buying something to hold forever. For you to actually endorse something as a property, if you said to me, “Mike, what should I buy for the next hundred years?” I’d say, “Well, some amount of real estate, some amount of scarce collectibles, some amount of Bitcoin. You could run your company.” But running your company is an investment.

(03:30:21)
So the savings are properties. If you said, “What should I invest in?” I’d say, “Well, here’s a list of good companies, private companies. You could start your own company. That’s an investment.” If you said, “What should I trade?” Well, I’m trading as a proprietary thing. I don’t have any special insight into that. If you’re a good trader, you know you are. If you said to me, “What should you speculate in?” We talk about meme stocks and meme coins, and it sits up there, it sits right in the same space with what horse should you bet on and what sports team should you gamble on, and should you bet on black six times in a row and double down each time? It’s fun, but at the end of the day, it’s a speculation, right? You can’t build a civilization-
Lex Fridman
(03:31:14)
On speculation.
Michael Saylor
(03:31:15)
… on it. It’s not an institutional asset. And in fact, where I’d leave it, right, is Bitcoin is clearly digital property, which makes it an institutional-grade investable asset for a public company, a public figure, a public investor, or anybody that’s risk-averse. I think that the top 100 other cryptos are like venture capital investments. And if you’re a VC, and if you’re a qualified technical investor and you have a pool of capital and you can take that kind of risk, then you can parse through that and form opinions.

(03:31:49)
It’s just orders of magnitude more risky because of competition, because of ambition and because of regulation. And if you take the meme coins, it’s like when some rapper comes out with a meme coin, it’s like maybe it’ll peak when I hear about it. SHIB was created as the coin such that it had so many zeros after the decimal point that when you looked at it on the exchanges, it always showed zero, zero, zero, zero. And it wasn’t until six months after it got popular that they started expanding the display so you could see whether the price had changed.

Elon Musk

Lex Fridman
(03:32:27)
That’s speculation. Maybe you can correct me, but you’ve been critical of Elon Musk in the past in the crypto space. Where do you stand on Elon’s effect on Bitcoin or cryptocurrency in general these days?
Michael Saylor
(03:32:42)
I believe that Bitcoin is a massive breakthrough for the human race that will cure half the problems in the world and generate hundreds of trillions of dollars of economic value to the civilization. And I believe that it’s in an early stage, where many people don’t understand it and they’re afraid of it, and there’s FUD, and there’s uncertainty, there’s doubt and there’s fear, and there’s a very noisy crypto world, and there’s 15,000 other cryptos that are seeking relevance. And I think most of the FUD is actually fueled by the other crypto entrepreneurs. So the environmental FUD and the other types of uncertainty that’s around Bitcoin, generally, they’re not coming from legitimate environmentalists, they don’t come from legitimate critics. They actually are guerrilla marketing campaigns that are being financed and fueled by other crypto entrepreneurs because they have an interest in doing so.

(03:33:48)
So if I look at the constructive path forward, first, I think it’d be very constructive for corporations to embrace Bitcoin and build applications on top of it. You don’t need to fix it. There’s nothing wrong with it. When you put it on a layer two and a layer three, it moves a billion times a second at the speed of light. So every beautiful, cool DeFi application, every crypto application, everything you could imagine you might want to do, you can do with a legitimate company and a legitimate website or mobile application sitting on top of Bitcoin or lightning if you want to. So I think that to the extent that people do that, that’s going to be better for the world. If you consider what holds people back, I think it’s just misperceptions about what Bitcoin is. So I’m a big fan of just educating people. If you’re not going to commercialize it, then just educate people on what it is.

(03:34:59)
So for example, Bitcoin is the most efficient use of energy in the world by far. Right? Most people don’t necessarily perceive that or realize that, but if you were to take any metric of energy intensity: you put $2 billion worth of electricity into the network every year, and it’s worth $850 billion. There is no industry in the real world that is that energy efficient. Not only is it that energy efficient, it’s also the most sustainable industry. We do surveys; 58% of Bitcoin mining energy is sustainable. So there’s a very good story. In fact, every other industry, planes, trains, automobiles, construction, food, medicine, everything else, is less clean, less efficient. So-
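The energy-intensity claim reduces to a ratio of the two figures just quoted (both taken from the conversation, not independently verified here): roughly $2 billion of electricity per year sustaining roughly $850 billion of network value.

```python
# Energy intensity of the Bitcoin network using the figures quoted
# above: ~$2B of electricity per year sustaining ~$850B of market value.

electricity_cost = 2e9  # dollars of electricity per year, as quoted
network_value = 850e9   # dollars of market value, as quoted

value_per_energy_dollar = network_value / electricity_cost
energy_cost_share = electricity_cost / network_value

print(value_per_energy_dollar)     # 425.0 dollars of value per energy dollar
print(f"{energy_cost_share:.3%}")  # 0.235% annual energy cost vs value
```

On these numbers, each dollar of electricity sustains $425 of network value, an energy cost of about a quarter of a percent of value per year, which is the comparison being drawn against other industries.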
Lex Fridman
(03:35:52)
So the basic debate was-
Michael Saylor
(03:35:54)
I wouldn’t say there is a debate. I would just say that to the extent that the Bitcoin community had any issue with Elon, it was just this environmental uncertainty that he fueled in a couple of his tweets, which I think just is very distracting.
Lex Fridman
(03:36:13)
Well, that was one of them, but I think it’s the Bitcoin maximalists, but generally the crypto community, what you call the crypto entrepreneurs… They’re also using it for investment, for speculation, and therefore get very passionate about famous people, celebrities, including you, saying positive stuff about any one particular crypto, a thing you can buy on Coinbase. And so they might be unhappy with Elon Musk that he’s promoting Bitcoin and then not, and then promoting Dogecoin, then not. There’s so much emotion tied up in the communication on this topic, and I think that’s where a lot of the-
Michael Saylor
(03:37:08)
Look, I don’t have a criticism of Elon Musk. He’s free to do whatever he wishes to do. It’s his life. In fact, Elon Musk is the second-largest supporter of Bitcoin in the world. So I think that the Bitcoin community tends to eat its own quite a bit. It tends to be very self-critical, and instead of saying, “Well, Elon is more supportive of Bitcoin than the other 10,000 people in the world with serious amounts of money,” they focus upon…
Lex Fridman
(03:37:43)
Yeah, this is strange. Eating your own is just…
Michael Saylor
(03:37:46)
I think he’s free to do what he wants to do. I think he’s done a lot of good for Bitcoin in putting it on the balance sheet of Tesla and holding it, and I think that sent a very powerful message.

Advice for young people

Lex Fridman
(03:38:00)
Do you have advice for young people? You’ve had a heck of a life, you’ve done quite a lot of things, starting before MIT, but starting with MIT: is there advice you have for young people, in high school and college, on how to have a career they can be proud of, how to have a life they can be proud of?
Michael Saylor
(03:38:24)
I was asked by somebody for quick advice for his young children, twins, for when they enter adulthood. He said, “Give me your advice for them,” in a letter. “I’m going to give it to them when they turn 21,” or something. I was at a party and then he handed me this sheet of paper, and I thought, “Oh, he wants me to write it down right now.” So I sat down, I started writing, and I figured, “Well, what would you want to tell someone at age 21?”
Lex Fridman
(03:38:54)
You wrote it down.
Michael Saylor
(03:38:55)
So I wrote it down. Then I tweeted it, and it’s sitting on Twitter. But I’ll tell you what I said. I said, “My advice, if you’re entering adulthood: focus your energy, guard your time, train your mind, train your body, think for yourself, curate your friends, curate your environment, keep your promises, stay cheerful and constructive, and upgrade the world.” That was the 10.
Lex Fridman
(03:39:32)
Upgrade the world. That’s an interesting choice of words. Upgrade the world. Upgrade the world.
Michael Saylor
(03:39:39)
It’s like an engineer’s [inaudible 03:39:42]
Lex Fridman
(03:39:41)
It’s a very, yeah, it is a very engineering-themed… Keep your promises too, that’s an interesting one.
Michael Saylor
(03:39:50)
I think most people suffer because they don’t focus. You got to figure out… I think the big risk in this world is there’s too much of everything.
Lex Fridman
(03:40:00)
Yeah.
Michael Saylor
(03:40:02)
You can sit and watch chess videos a hundred hours a week and you’ll never get through all the chess videos. There’s too much of every possible thing, too much of every good thing. So figuring out what you want to do, and then… Everything will suck up your time. There’s a hundred streaming channels to binge-watch on. You got to guard your time and then train your body, train your mind, and control who’s around you, control what surrounds you. So ultimately, in a world where there’s too much of everything, then your success-
Lex Fridman
(03:40:44)
It’s like those laser eyes, you have to focus on just a few of those things.
Michael Saylor
(03:40:51)
Yeah. I got a thousand opinions we could talk about, and I could pursue a thousand things, but I don’t expect to be successful. I’m not sure that my opinion in any of the 999 is any more valid than the leader of thought in that area. So how about if I just focus upon one thing and then deliver the best I can in the one thing. That’s the laser eye message. The rest get you distracted.
Lex Fridman
(03:41:22)
How do you achieve that? Do you find yourself, given where you are in life, having to say no a lot, or just focus comes natural when you just ignore everything around you? So how do you achieve that focus?
Michael Saylor
(03:41:36)
I think it helps if people know what you’re focused on.
Lex Fridman
(03:41:40)
So everything about you just radiates that, people know. People know this is-
Michael Saylor
(03:41:44)
If they know what you’re focused on, then you won’t get so many other things coming your way. If you dally or if you flirt with 27 different things, then you’re going to get approached by people in each of the 27 communities. Right?
Lex Fridman
(03:42:03)
You mentioned beginning a PhD, and given your roots at MIT, do you think there’s… There are all kinds of journeys you can take to educate yourself. Do you think a PhD or school is still worth it, or are there other paths through life?
Michael Saylor
(03:42:23)
Is it worth it if you had to pay for it? Is it worth it if you spend the time on it?
Lex Fridman
(03:42:27)
The time and the money is a big cost?
Michael Saylor
(03:42:31)
I think…
Lex Fridman
(03:42:32)
Time, probably the bigger one. Right?
Michael Saylor
(03:42:34)
It seems clear to me that the world wants more specialists. It wants you to be an expert and to focus on one area. It’s punishing generalists, jacks-of-all-trades, especially people that are generalists in the physical realm. Because if you’re a specialist in the digital realm, you might very well… You’re the person with 700,000 followers on Twitter and you show them how to tie knots, or you’re the banjo player with 1.8 million followers, and when everybody types banjo, it’s you, right?
Lex Fridman
(03:43:13)
Yeah.
Michael Saylor
(03:43:14)
And so the world wants people that do something well, and then it wants to stamp out 18 million copies of them. And so that argues in favor of focus. Now, the definition of a PhD is someone with enough of an education that they’re capable of or have made… I guess to get a PhD, technically you have to have done a dissertation where you made a seminal contribution to the body of human knowledge. And if you haven’t done that, technically you have a master’s degree, but you’re not a doctor. So if you’re interested in any of the academic disciplines that a PhD would be granted for, then I can see that being a reasonable pursuit. But there are many people that are specialists… you know the Agadmator?
Lex Fridman
(03:43:14)
Yeah, yeah, yeah.
Michael Saylor
(03:44:06)
The Agadmator on YouTube. He’s the world’s greatest chess commentator.
Lex Fridman
(03:44:13)
Yeah.
Michael Saylor
(03:44:15)
I’ve watched his career, and he’s got progressively better, and he’s really good.
Lex Fridman
(03:44:18)
He’s going to love hearing this.
Michael Saylor
(03:44:19)
If the Agadmator ever hears this, I’m a big fan of the Agadmator. I have to cut myself off, right? Because otherwise you’ll watch the entire Paul Morphy saga for your weekend. But the point really is, YouTube is full of experts who are specialists in something, and they rise to the top of their profession. Twitter is too. The internet is. So I would advocate that you figure out what you’re passionate about and what you’re good at, and you do focus on it, especially if… If the thing that you’re doing can be automated… The problem is, back to that 500,000 algebra teacher comment, the problem is if it is possible to be automated, then over time someone’s probably going to automate it, and that squeezes the state space of everybody else. Like after the lockdowns: it used to be there were all these local bands that played in bars, and everybody went to the bar to see the local band, and then during the lockdown you would have these six supergroups, and they would all get 500,000 or a million followers, and all these smaller local bands just got no attention at all.
Lex Fridman
(03:45:47)
Well, the interesting thing is one of those 500,000 algebra teachers is likely to be part of the automation. So it’s an opportunity for you to think, “Where’s my field, my discipline, evolving into?” I talked to a bunch of librarians, just happen to be friends with librarians. Libraries will probably be evolving, and it’s up to you as a librarian to be one of the few that remain in the rubble.
Michael Saylor
(03:46:18)
If you’re going to give commentary on Shakespeare plays, I want you to basically do it for every Shakespeare play. I want you to be the Shakespeare dude. Because just like, Lex, you’re like..
Lex Fridman
(03:46:29)
I don’t know what kind of…
Michael Saylor
(03:46:31)
You’re the deep thinking podcaster, or you’re the podcaster that goes after the deep intellectual conversations. And once I get comfortable with you, and I like you, then I start binge-watching Lex. But if you changed your format through 16 different formats so that you could compete with 16 different other personalities on YouTube, you probably wouldn’t beat any of them, right? You would probably just kind of sink into the, you’re the number two or number three guy. You’re not the number one guy in the format. And I think the algorithm, right? The Twitter algorithm and the YouTube algorithm, they really reward the person that’s focused on message, consistent. The world wants somebody they can trust that’s consistent and reliable, and they kind of want to know what they’re getting into, because, and this is taken for granted maybe, but there’s 10 million people vying for every hour of your time. And so the fact that anybody gives you any time at all is a huge-
Lex Fridman
(03:47:44)
Is amazing.
Michael Saylor
(03:47:45)
… privilege, right? And you should be thanking them and you should respect their time.
Lex Fridman
(03:47:50)
It’s interesting. Everything you said is very interesting. But of course, from my perspective and probably from your perspective, my actual life has nothing to do with, it’s just being focused on stuff. In my case, it’s like focus on doing the thing I really enjoy doing and being myself, and not caring about anything else. I don’t care about views or likes or attention. And that just maintaining that focus is the way, from an individual perspective, you live that life. But yeah, it does seem that the world and technology is rewarding the specialization and creating bigger and bigger platforms for the different specializations. And then that lifts all boats actually, because the specializations get better and better and better at teaching people to do specific things, and they educate themselves. Just everybody gets more and more knowledgeable and more and more empowered.
Michael Saylor
(03:48:46)
The reward for authenticity more than offsets the specificity with which you pursue your mission. Another way to say it is “Nobody wants to read advertising.” If you were to spend a hundred million dollars advertising your thing, I probably wouldn’t want to watch it, but-
Lex Fridman
(03:49:07)
That’s so fascinating.
Michael Saylor
(03:49:07)
Yeah.
Lex Fridman
(03:49:07)
That’s so fascinating.
Michael Saylor
(03:49:09)
We see the death of that. And so the commercial shows are losing their audiences, and the authentic specialist or the authentic artist are gaining their audience.

Mortality

Lex Fridman
(03:49:24)
And that’s a beautiful thing. Speaking of deep thinking, you’re just a human. Your life ends. You’ve accumulated so much wisdom, so much money, but the ride ends. Do you think about that? Do you ponder your death, your mortality? Are you afraid of it?
Michael Saylor
(03:49:46)
When I go, all my assets will flow into a foundation, and the foundation’s mission is to make education free for everybody forever. And if I’m able to contribute to the creation of a more perfect monetary system, then maybe that foundation will go on forever.
Lex Fridman
(03:50:09)
The idea, the foundation of the idea, so not just… Each of the foundations.
Michael Saylor
(03:50:17)
It’s not clear we’re on the s-curve of immortal life yet. That’s a biological question, and you’ve asked that in some of your other interviews a lot. I think that we are on the threshold of immortal life for ideas, or immortal life for certain institutions or computer programs. So if we can fix the money, then you can create a technically perfected endowment. And then the question really is, what are your ideas? What do you want to leave behind? And so if it’s a park, then you endow the park, right? If it’s free education, you endow that. If it’s some other ethical idea, right?
Lex Fridman
(03:51:02)
Does it make you sad that there’s something that you’ve endowed, some very powerful idea of digital energy that you put out into, you help put it into the world, and that your mind, your conscious mind, will no longer be there to experience it. It’s just gone forever.
Michael Saylor
(03:51:27)
I’d rather think that the thing that Satoshi taught us is you should do your part during some phase of the journey, and then you should get out of the way. I think Steve Jobs said something similar to that effect in a very famous speech one day, which is “Death is a natural part of life and it makes way for the next generation.” And I think the goal is you upgrade the world, right? You leave it a better place, but you get out of the way. And I think when that breaks down, bad things happen. I think nature cleanses itself. There’s a cycle of life.

Meaning of life

Lex Fridman
(03:52:15)
And speaking of one of the great people who did also get out of the way, George Washington. So hopefully when you get out of the way, nobody’s bleeding you to death in hopes of helping you. What do you think, to do a bit of a callback, what do you think is the meaning of this whole thing? What’s the meaning of life? Why are we here? We talked about the rise of human civilization. It seems like we’re engineers at heart. We build cool stuff, better and better use of energy, channeling energy to be productive. Why? What’s it all for?
Michael Saylor
(03:52:55)
You’re getting metaphysical on me.
Lex Fridman
(03:52:57)
Very. There’s a beautiful boat to the left of us. Why do we do that? This boat that sailed the ocean? Then we build models of it to celebrate great engineering of the past.
Michael Saylor
(03:53:08)
To engineer is divine. You can make lots of arguments as why we’re here. We’re either here to entertain ourselves or we’re here to create something that’s beautiful or something that’s functional. I think if you’re an engineer, you entertain yourself by creating something that’s both beautiful and functional. So I think all three of those things, it’s entertaining, but it’s ethical. You got to admire the first person that built a bridge, crossing a chasm, or the first person to work out the problem of how to get running water to a village, or the first person to figure out how to dam up a river, or mastered agriculture, or the guy that figured out how to grow fruit on trees or created orchards, and maybe one day had 10 fruit trees. He is pretty proud of himself.
Lex Fridman
(03:54:03)
So that’s functional. There is also something to that, just like you said, that’s just beautiful. It does get you closer to, like you said, the divine. Something… When you step back and look at the entirety of it. A collective of humans, using a beautiful invention or creation, or just something about this instrument is creating a beautiful piece of music, that seems just right. That’s what we’re here for. Whatever the divine is, it seems like we’re here for that. And I, of course, love talking to you because from the engineering perspective, the functional is ultimately the mechanism towards the beauty.
Michael Saylor
(03:54:53)
Isn’t there something beautiful about making the world a better place for people that you love, your friends, your family, or yourself? When you think about the entire arc of human existence, and you roll the clock back 500,000 years, and you think about every struggle of everyone that came before us, and everything they had to overcome in order to put you here right now, you got to admire that, right? You got to respect that.
Lex Fridman
(03:55:30)
That’s a heck of a gift they gave us. It’s also a heck of a responsibility. Don’t screw it up.
Michael Saylor
(03:55:39)
If I dropped you 500,000 years ago and I said, “Figure out steel refining, or figure out silicon chips, fab production, or whatever it is.”
Lex Fridman
(03:55:52)
To fly, or fire.
Michael Saylor
(03:55:53)
You’d be like, “Ugh.” And so now we’re here, and I guess the way you repay them is you fix everything in front of your face that you can. And that means, to someone like Elon, it means get us off the planet. To someone like me, it’s, I think, fix the energy and the system.
Lex Fridman
(03:56:14)
And that gives me hope. Michael, this was an incredible conversation. You’re an incredible human. It’s a huge honor you would sit down with me. Thank you so much for talking today.
Michael Saylor
(03:56:23)
Yeah, thanks for having me, Lex.
Lex Fridman
(03:56:26)
Thanks for listening to this conversation with Michael Saylor. To support this podcast, please check out our sponsors in the description. And now let me leave you with a few words from Francis Bacon: “Money is a great servant, but a bad master.” Thank you for listening and hope to see you next time.

Transcript for Nick Lane: Origin of Life, Evolution, Aliens, Biology, and Consciousness | Lex Fridman Podcast #318

This is a transcript of Lex Fridman Podcast #318 with Nick Lane.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Nick Lane
(00:00:00)
Well, the source of energy at the origin of life is the reaction between carbon dioxide and hydrogen. And amazingly, most of these reactions are exergonic, which is to say they release energy. If you have hydrogen and CO2, and you put them together in a Falcon tube and you warm it up to, say, 50 degrees centigrade, and you put in a couple of catalysts and you shake it, nothing’s going to happen. But thermodynamically, that is less stable: two gases, hydrogen and CO2, are less stable than cells. What should happen is you get cells coming out. So why doesn’t that happen? Because of the kinetic barriers. That’s where you need the spark.
Lex Fridman
(00:00:38)
The following is a conversation with Nick Lane, a biochemist at University College London, and author of some of my favorite books on biology, science, and life ever written, including his two most recent titles, Transformer: The Deep Chemistry of Life and Death, and The Vital Question: Why Is Life the Way It Is? This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Nick Lane.

Origin of life

Lex Fridman
(00:01:09)
Let’s start with perhaps the most mysterious, the most interesting question that we little humans can ask of ourselves. How did life originate on earth?
Nick Lane
(00:01:21)
You could ask anybody working on the subject, and you’ll get a different answer from all of them. They will be pretty passionately held opinions, and they’re opinions grounded in science, but at this point they’re still really opinions. Because there’s so much stuff to know that all we can ever do is get a small slice of it, and it’s the context which matters. So, I can give you my answer. My answer comes from a biologist’s point of view, which has been missing from the equation for decades: well, what does life do on earth? Why is it this way? Why is it made of cells? Why is it made of carbon? Why is it powered by electrical charges on membranes? There are all these interesting questions about cells, and you can then look to see: well, is there an environment on the early earth 4 billion years ago that kind of matches the requirements of cells?

(00:02:16)
Well, there is one. There’s a very obvious one. It’s basically created by whenever you have a wet rocky planet, you get these hydrothermal vents, which generate hydrogen gas in bucket loads and electrical charges on kind of cell-like pores that can drive the kind of chemistry that life does. So, it seems so beautiful and so obvious, that I’ve spent the last 10 years or more trying to do experiments. It turns out to be difficult, of course. Everything’s more difficult than you ever thought it was going to be, but it looks, I would say, more true rather than less true over that ten-year period. I think I have to take a step back every now and then and think, “Hang on a minute. Where is this going?” I’m happy it’s going in a sensible direction.

(00:03:02)
And I think then you have these other interesting dilemmas. I’m often accused of being too focused on life on earth, too kind of narrow-minded and inward looking, you might say. I’m talking about carbon, I’m talking about cells. And maybe you or plenty of people can say to me, “Oh, yeah, but life can be anything. You have no imagination.” And maybe they’re right, but unless we can say why life here is this way, and whether those reasons are fundamental reasons or just trivial reasons, then we can’t answer that question. So, I think they’re fundamental reasons, and I think we need to worry about them.
Lex Fridman
(00:03:40)
Yeah, there might be some deep truth to the puzzle here on earth that will resonate with other puzzles elsewhere, that… solving this particular puzzle will give us that deeper truth. So, what does this puzzle… You said vents, hydrogen, wet. So, chemically, what is the potion here? How important is oxygen? You wrote a book about this.
Nick Lane
(00:04:07)
Yeah. And I actually just came straight here from a conference where I was chairing a session on whether oxygen matters or not in the history of life. Of course, it matters, but it matters most to the origin of life to be not there. As I see it, we have this… Life is made of carbon basically, primarily, organic molecules with carbon-carbon bonds. And the building block, the Lego brick that we take out of the air or take out of the oceans is carbon dioxide. And to turn carbon dioxide into organic molecules, we need to strap on hydrogen. And so we need… And this is basically what life is doing, it’s hydrogenating carbon dioxide. It’s taking the hydrogen that bubbles out of the earth in these hydrothermal vents, and it sticks it on CO2. And it’s kind of really as simple as that. And actually thermodynamically, the thing that I find most troubling is that if you do these experiments in the lab, the molecules you get are exactly the molecules that we see at the heart of biochemistry and the heart of life.
Lex Fridman
(00:05:10)
Is there something to be said about the earliest origins of that little potion, that chemical process? What really is the spark there?
Nick Lane
(00:05:24)
There isn’t a spark. There is a continuous chemical reaction. And there is kind of a spark, but it’s a continuous electrical charge, which helps drive that reaction.
Lex Fridman
(00:05:37)
So, literally spark.
Nick Lane
(00:05:39)
Well, the charge at least. But yes, a spark in that sense is… We tend to think in terms of Frankenstein. We tend to think in terms of electricity, and one moment you zap something and it comes alive. And what does that really mean? It’s come alive. And now what’s sustaining it? Well, we are sustained by oxygen, by this continuous chemical reaction. And if you put a plastic bag on your head, then you’ve got a minute or something before it’s all over.
Lex Fridman
(00:06:07)
So, it’s some way of being able to leverage a source of energy?
Nick Lane
(00:06:11)
Well, the source of energy at the origin of life is the reaction between carbon dioxide and hydrogen. And amazingly, most of these reactions are exergonic, which is to say they release energy. If you have hydrogen and CO2 and you put them together in a Falcon tube and you warm it up to, say, 50 degrees centigrade, and you put in a couple of catalysts and you shake it, nothing’s going to happen. But thermodynamically, that is less stable: two gases, hydrogen and CO2, are less stable than cells. What should happen is you get cells coming out. So, why doesn’t that happen? It’s because of the kinetic barriers. That’s where you need the spark.
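[Editor’s note: as a back-of-the-envelope sketch of the exergonic reactions described here, the two best-known overall routes for hydrogenating CO2 can be written out with approximate standard free energies at pH 7. The numbers below are textbook bioenergetics values, not figures quoted in the conversation.]

```latex
% Methanogenesis (overall):
\mathrm{CO_2 + 4\,H_2 \;\rightarrow\; CH_4 + 2\,H_2O},
\qquad \Delta G^{\circ\prime} \approx -131\ \mathrm{kJ/mol}

% Acetogenesis (overall):
\mathrm{2\,CO_2 + 4\,H_2 \;\rightarrow\; CH_3COOH + 2\,H_2O},
\qquad \Delta G^{\circ\prime} \approx -95\ \mathrm{kJ/mol}
```

Both free-energy changes are negative, i.e. the reactions release energy, which is exactly the point being made: nothing happens in the Falcon tube not because the thermodynamics forbids it, but because of kinetic barriers.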
Lex Fridman
(00:06:49)
Is it possible that life originated multiple times on earth? The way you describe it, you make it sound so easy.
Nick Lane
(00:06:57)
There’s a long distance to go from those first bits of prebiotic chemistry to, say, molecular machines, like ribosomes.
Lex Fridman
(00:07:05)
Is that the first thing that you would say is life? If I introduce the two of you at a party, you would say that’s a living thing?
Nick Lane
(00:07:15)
I would say as soon as we introduce genes, information, into systems that are growing anyway. So I would talk about growing protocells, as soon as we introduce even random bits of information into there. I’m thinking about RNA molecules, for example. It doesn’t have to have any information in it. It can be a completely random sequence, but if it’s introduced into a system which is in any case growing and doubling itself and reproducing itself, then any changes in that sequence that allow it to do so better or worse are now selected by perfectly normal natural selection.
Lex Fridman
(00:07:51)
But it’s a system-
Nick Lane
(00:07:52)
So, that’s when it becomes alive to my mind.
Lex Fridman
(00:07:54)
… that’s encompassed into an object, that keeps information, and evolves that information over time or changes that information over time.
Nick Lane
(00:08:06)
Yes, exactly.
Lex Fridman
(00:08:06)
In response to the enzymes.
Nick Lane
(00:08:07)
So, it’s always part of a cell system from the very beginning.
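[Editor’s note: the criterion described here, random variation inside a system that is already growing and copying itself, sorted by ordinary natural selection, can be illustrated with a toy simulation. Everything in this sketch, including the shorthand of “fitness = probability of copying yourself each generation,” is a hypothetical simplification for illustration, not a model from the conversation.]

```python
import random

random.seed(0)  # deterministic toy run

def generation(population, mutation_sd=0.05, capacity=200):
    """One round of growth: every replicator persists, and copies itself
    with probability equal to its 'fitness'. Each copy inherits that
    fitness plus a small random copying error. A fixed resource limit
    (capacity) then trims the population back down at random."""
    offspring = []
    for fitness in population:
        offspring.append(fitness)            # the parent survives
        if random.random() < fitness:        # a replication event
            child = fitness + random.gauss(0, mutation_sd)
            offspring.append(min(1.0, max(0.0, child)))  # clamp to [0, 1]
    return random.sample(offspring, min(len(offspring), capacity))

# Start from identical ancestors with no information advantage...
population = [0.3] * 200
for _ in range(100):
    population = generation(population)

mean_fitness = sum(population) / len(population)
# ...differential copying alone pushes mean copying ability above 0.3
print(round(mean_fitness, 2))
```

The variation here is blind; only the difference in copying rate does the sorting, which is the force the phrase “perfectly normal natural selection” is pointing at.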
Lex Fridman
(00:08:11)
So, is your sense that it started only once because it’s difficult or is it possible it started in multiple occasions on earth?
Nick Lane
(00:08:18)
It’s possible it started on multiple occasions. There are two provisos to that. One of them is that oxygen really makes it impossible for life to start. So, as soon as we’ve got oxygen in the atmosphere, then life isn’t going to keep starting over. So, I often get asked by people, “Why can’t we have life starting? If it’s so easy, why can’t life start in these vents now?” And the answer is, if you want hydrogen to react with CO2 and there’s oxygen there, hydrogen reacts with oxygen instead. You get an explosive reaction that way. It’s rocket fuel. So, it’s never going to happen. But for the origin of life earlier than that, all we know is that there’s a single common ancestor for all of life. There could have been multiple origins, and they all just disappeared.

(00:09:03)
But there’s a very interesting deep split in life between bacteria and what are called archaea, which look just the same as bacteria. And they’re not quite as diverse, but nearly, and they are very different in their biochemistry. And so any explanation for the origin of life has to account, as well, for why they’re so different and yet so similar. And that makes me think that life probably did arise only once.
Lex Fridman
(00:09:29)
Can you describe the difference that’s interesting there, how they’re similar, how they’re different?
Nick Lane
(00:09:34)
Well, they’re different in their membranes primarily. They’re different in things like DNA replication. They use completely different enzymes, and the genes behind it for replicating DNA.
Lex Fridman
(00:09:44)
So, they both have membranes, both have DNA replication.
Nick Lane
(00:09:48)
Yes.
Lex Fridman
(00:09:48)
The process of that is different.
Nick Lane
(00:09:51)
They both have DNA. The genetic code is identical in them both. The way in which it’s transcribed into RNA, into the copy of a gene, and the way that that’s then translated into a protein, that’s all basically the same in both these groups, so they clearly share a common ancestor. It’s just that they’re different in fundamental ways as well. And if you think about, “Well, what kind of processes could drive that divergence very early on?” I can think about it in terms of membranes, in terms of the electrical charges on membranes, and that makes me think that there were probably many unsuccessful attempts and only one really successful attempt.
Lex Fridman
(00:10:30)
Can you explain why that divergence makes you think there’s one common ancestor? Can you describe that intuition? I’m a little bit unclear about why the leap from the divergence means there’s one. Do you mean the divergence indicates that there was a big invention at that time from one source?
Nick Lane
(00:10:50)
Yes. As I imagine it, you have a common ancestor living in a hydrothermal vent. Let’s say there are millions of vents and millions of potential common ancestors living in all of those vents, but only one of them makes it out first. Then you could imagine that that cell is then going to take over the world and wipe out everything else. And so what you would see would be a single common ancestor for all of life, but with lots of different vent systems, all vying to create the first life forms, you might say.
Lex Fridman
(00:11:25)
So, this thing is a cell, a single-cell organism?
Nick Lane
(00:11:28)
Well, we’re always talking about populations of cells, but yes, these are single-celled organisms.
Lex Fridman
(00:11:33)
But the fundamental life form is a single cell. So, they’re always together, but they’re alone together. There’s a machinery in each one individual component, that if left by itself would still work, right?
Nick Lane
(00:11:50)
Yes, yes, yes. The unit of selection is a single cell. But selection operates over generations, and changes over generations happen in populations of cells, so you can’t really call a single cell the unit of selection on its own: unless you have a population, you can’t evolve, you can’t change.
Lex Fridman
(00:12:07)
Right, but there was one Chuck Norris cell (that’s an American reference) that made it out of the vents, the first one?
Nick Lane
(00:12:19)
So, imagine then that there’s one cell gets out and it takes over the world.
Lex Fridman
(00:12:23)
It gets out in the water. It’s floating around.
Nick Lane
(00:12:25)
Well, deep in the ocean somewhere. But actually two cells got out. And they appear to have got out from the same vent because they both share the same code and everything else. So unless all… We’ve got a million different common ancestors in all these different vents, so either they all have the same code, and two cells spontaneously emerged from different places, or two different cells, fundamentally different cells, came from the same place. So, either way, what are the constraints that say, “Not just one came out or not half a million came out, but two came out.”? That’s kind of a bit strange. So, how did they come out? Well, they come out because what you’re doing inside a vent is you’re relying on the electrical charges down there to power this reaction between hydrogen and CO2 to make yourself grow.

(00:13:17)
And when you leave the vent, you’ve got to do that yourself. You’ve got to power up your own membrane. And so the question is: well, how do you power up your own membrane? And the answer is, well, you need to pump. You need to pump ions to give an electrical charge on the membrane. So, what do the pumps look like? Well, the pumps look different in these two groups. It’s as if they both emerge from a common ancestor, and as soon as you’ve got that ancestor, things move very quickly and divergently. Why does the DNA replication look different? Well, it’s joined to the membrane. The membranes are different. The DNA replication is different because it’s joined to a different kind of membrane. So, there’s interesting… This is detail you may say, but it’s also fundamental because it’s about the two big divergent groups of life on earth that seemed to have diverged really early on.
Lex Fridman
(00:14:03)
It all started from one organism, and then that organism just started replicating the heck out of itself with some mutation of the DNA. So, there’s a competition through the process of evolution. They’re not trying to beat each other up. They’re just trying to live life.
Nick Lane
(00:14:24)
They are just replicators.
Lex Fridman
(00:14:25)
Yeah. Well, let’s not minimize their… They’re just trying to chill. They’re trying to relax up in the… But there’s no sense of trying to survive. They’re replicating-
Nick Lane
(00:14:36)
There’s no sense in which they’re trying to do anything. They’re just kind of an outgrowth of the earth, you might say.
Lex Fridman
(00:14:42)
Of course, the aliens would describe us humans in that same way.
Nick Lane
(00:14:46)
They might be right.
Lex Fridman
(00:14:47)
It’s primitive life. It’s just ants that are hairless or mostly hairless.
Nick Lane
(00:14:53)
Overgrown ants.

Panspermia

Lex Fridman
(00:14:54)
Overgrown ants. Okay. What do you think about the idea of panspermia, the theory that life did not originate on earth and was planted here from outer space or pseudo-panspermia, which is like the basic ingredients, the magic that you mentioned was planted here from elsewhere in space?
Nick Lane
(00:15:14)
I don’t find them helpful. That’s not to say they’re wrong. So pseudo-panspermia, the idea that the chemicals, the amino acids, the nucleotides are being delivered from space. Well, we know that happens. It’s unequivocal. They’re delivered on meteorites, comets and so on. So, what do they do next? That’s, to me, the question. Well, what they do is they stock a soup, presumably they land in a pond or in an ocean or wherever they land. And then a best possible case scenario is you end up with a soup of nucleotides and amino acids. And then you have to say, “So now what happens?”

(00:15:46)
And the answer is, “Oh, well, they have to go ‘bloop’ and become alive.” So, how did they do that? You may as well say that a miracle happened. I don’t believe in soup. I think what we have in a vent is a continuous conversion, a continuous growth, a continuous reaction, a continuous converting a flow of molecules into more of yourself, you might say, even if it’s a small bit. So, you’ve got a kind of continuous self-organization and growth from the very beginning. You never have that in a soup.
Lex Fridman
(00:16:17)
Isn’t the entire universe and living organisms in the universe, isn’t it just soup all the way down? Isn’t it all soup?
Nick Lane
(00:16:26)
No, no, soup almost by definition doesn’t have a structure.
Lex Fridman
(00:16:29)
But soup is a collection of ingredients that are randomly [inaudible 00:16:34].
Nick Lane
(00:16:34)
But they’re not random. We have chemistry going on here. We have membranes forming which are effectively oil-water interactions.
Lex Fridman
(00:16:44)
There’s a process going on. Okay, so it feels like there’s a direction to a… directed process.
Nick Lane
(00:16:47)
There are directions to processes, yeah. And if you’re starting with CO2, and you’ve got two reactive fluids being brought together and they react, what are they going to make? Well, they make carboxylic acids, which include the fatty acids that make up the cell membranes. And they form directly into bilayer membranes. They form like soap bubbles. It’s spontaneous organization caused by the nature of the molecules. And those things are capable of growing and are capable, in effect, of being selected. Even before there are genes, we have this. So, we have a lot of order, and that order is coming from thermodynamics. And the thermodynamics is always about increasing the entropy of the universe, but if you have oil and water and they’re separating, you are increasing the entropy of the universe, even though you’ve got some order, because the soap and the water are not miscible.

(00:17:37)
To come back to your first question about panspermia properly, that just pushes the question somewhere else, even if it’s true. Maybe life did start on Earth by panspermia, but so what are the principles that govern the emergence of life on any planet? It’s an assumption that life started here, and it’s an assumption that it started in a hydrothermal vent or it started in a terrestrial geothermal system. The question is: can we work out a testable sequence of events that would lead from one to the other one? And then test it, and see if there’s any truth in it or not. With panspermia, you can’t do any of that.
Lex Fridman
(00:18:14)
But the fundamental question of panspermia is: do we have the machine here on earth to build life?
Nick Lane
(00:18:21)
Not yet.
Lex Fridman
(00:18:23)
Is the vents enough? Is oxygen and hydrogen, and whatever the heck else we want, and some source of energy and heat, is that enough to build life?
Nick Lane
(00:18:36)
Yes.
Lex Fridman
(00:18:37)
Well, of course you would say that as a human, but there could be aliens right now chuckling at that idea. Maybe you need some special sauce, special elsewhere sauce. So, your sense is we have everything here.
Nick Lane
(00:18:54)
This is precisely the question. When I’m talking in schools, I like to start out with the idea of: we can make a time machine. We go back 4 billion years, and we go to these environments that people talk about. We go to a deep sea hydrothermal vent, we go to a kind of Yellowstone Park type environment, and we find some slime that looks promising, and we can test it. It’s made of organic molecules. It’s got a structure which is not obviously cells, but is this a stepping stone on the way to life or not? How do we know? Unless we’ve got an intellectual framework that says, “This is a stepping stone, and that’s not,” we’d never know. We wouldn’t know which environment to go to, what to look for, how to assess it. So, all we can ever hope for, because we’re never going to build that time machine, is to have an intellectual framework that can explain step by step, experiment by experiment, how we go from a sterile inorganic planet to living cells as we know them.

(00:19:52)
And in that framework, every time you have a choice, it could be this way or it could be that way, or there’s lots of possible forks down that road, did it have to be that way? Could it have been the other way, and would that have given you life with very different properties? And so if you come up with… It’s a long hypothesis, because as I say, we’re going from really simple prebiotic chemistry all the way through to genes and molecular machines. That’s a long, long pathway. And nobody in the field would agree on the order in which these things happened, which is not a bad thing because it means that you have to go out and do some experiments and try and demonstrate that it’s possible or not possible.

What is life?

Lex Fridman
(00:20:29)
It’s so freaking amazing that it happened though. It feels like there’s a direction to the thing. Can you try to answer from a framework of: what is life? So, you said there’s some order and yet there’s complexity, so it’s not perfectly ordered, it’s not boring. There’s still some fun in it. And it also feels like the processes have a direction through the selection mechanism. They seem to be building something, always better, always improving. Maybe it’s-
Nick Lane
(00:21:15)
That’s a perception.
Lex Fridman
(00:21:16)
That’s our romanticization, that things are always getting better. We’d like to believe that.
Nick Lane
(00:21:23)
You think about the world from the point of view of bacteria, and bacteria are the first things to emerge from whatever environment they came from, and they dominated the planet very, very quickly, and they haven’t really changed. 4 billion years later they look exactly the same.
Lex Fridman
(00:21:36)
So, about 4 billion years ago, bacteria started to really run the show, and then nothing happened for a while?
Nick Lane
(00:21:44)
Nothing happened for 2 billion years. Then after 2 billion years, we see another single event, an origin, if you like, of our own type of cell, the eukaryotic cell: cells with a nucleus and lots of stuff going on inside. Another singular origin. It only happened once in the history of life on earth. Maybe it happened multiple times, and there’s just no evidence because everything else disappeared. But we have to at least take it seriously that there’s something that stops bacteria from becoming more complex, because they didn’t. That’s a fact, that they emerged 4 billion years ago, and something happened 2 billion years ago, but the bacteria themselves didn’t change. They remain bacterial. So, there is no necessary trajectory towards great complexity and human beings at the end of it. It’s very easy to imagine that without photosynthesis arising or without eukaryotes arising, the planet could be full of bacteria and nothing else.
Lex Fridman
(00:22:36)
But we’ll get to that, because that’s a brilliant invention, and there’s a few brilliant inventions along the way. But what is life? If you were to show up on earth, but take that time machine, and you said, asking yourself the question, “Is this a stepping stone towards life?” As you step along when you see the early bacteria, how would you know it’s life? And then this is a really important question when you go to other planets and look for life: what is the framework of telling a difference between a rock and a bacteria?
Nick Lane
(00:23:12)
The question’s kind of both impossible to answer and trivial at the same time. And I don’t like to answer it because I don’t think there is an answer. I think we’re trying to describe-
Lex Fridman
(00:23:22)
Those are the most fun questions. What do you mean, there’s no answer?
Nick Lane
(00:23:24)
There is no answer. There’s lot… There are at least 40 or 50 different definitions of life out there, and most of them are, well-
Lex Fridman
(00:23:31)
Not convincing.
Nick Lane
(00:23:32)
… obviously bad in one way or another. I can never remember the exact words that people use, but there’s NASA’s working definition of life, which more or less says, “A self-sustaining system capable of evolution,” or something along those lines. And I immediately have a problem with the words self-sustaining, because it’s sustained by the environment. And I know what they’re getting at. I know what they’re trying to say, but I pick a hole in that. And there are always wags who say, “But by that definition, a rabbit is not alive. Only a pair of rabbits would be alive, because a single rabbit is incapable of copying itself.” There are all kinds of pedantic, silly, but also important objections to any hypothesis.

(00:24:19)
The real question is: what is… We can argue all day, or people do argue all day about: is a virus alive or not? And it depends on the context. In fact, most biologists could not agree about that. So, then what about a jumping gene, a retroelement or something like that? It’s even simpler than a virus, but it’s capable of converting its environment into a copy of itself. And that’s about as close… This is not a definition, but this is a kind of description of life: it’s able to parasitize the environment, and that goes for plants as well as animals and bacteria and viruses, to make a relatively exact copy of itself, an informationally exact copy of itself.
Lex Fridman
(00:25:04)
By the way, it doesn’t really have to be a copy of itself, it just has to be… you have to create something that’s interesting. That’s the way evolution works, and it is an extremely powerful process: basically, make a copy of yourself and sometimes mess up a little bit.
Nick Lane
(00:25:24)
Yes. Absolutely.
Lex Fridman
(00:25:25)
Okay. That seems to work really well. I wonder if it’s possible to-
Nick Lane
(00:25:28)
Mess up big time?
Lex Fridman
(00:25:29)
Mess up big time as a standard, as the default.
Nick Lane
(00:25:32)
It’s called the hopeful monster, and-
Lex Fridman
(00:25:34)
It doesn’t work?
Nick Lane
(00:25:36)
In principle, it can. Actually, it turns out, I would say that this is due for a reemergence. There’s some amazing work from Michael Levin. I don’t know if you came across him, but if you haven’t interviewed him, you should interview him.
Lex Fridman
(00:25:49)
Yeah, in Boston. I’m talking to him in a few days.
Nick Lane
(00:25:53)
Oh, fantastic.
Lex Fridman
(00:25:56)
So I mentioned off… There’s two people that, if I may mention. Andrej Karpathy is a friend who’s really admired in the AI community, said, “You absolutely must talk to Michael and to Nick.” So, of course I’m a huge fan of yours, so I’m really fortunate that we can actually make this happen. Anyway, you were saying.
Nick Lane
(00:26:16)
Well, Michael Levin is doing amazing work basically about the way in which electrical fields control development. And he’s done some work with planarian worms, or flatworms, where he’ll tell you all about this, so I won’t say any more than the minimum, but basically you can cut their head off and they’ll redevelop a new head. But the head that they develop depends. If you knock out just one ion pump in a membrane, so you change the electrical circuitry just a little bit, you can come up with a completely different head. It can be a head which is similar to those that diverged 150 million years ago, or it can be a head which no one’s ever seen before, a different kind of head. Now that is really, you might say, a hopeful monster.

(00:26:59)
This is a kind of leap into a different direction. The only question for natural selection is: does it work? Is the change itself feasible as a single change? And the answer is yes. It’s just a small change to a single gene. And the second thing is it gives rise to a completely different morphology. Does it work? And if it works, that can easily be a shift. But for it to be a speciation, for it to continue, for it to give rise to a different morphology over time, then it has to be perpetuated. So that shift, that change in that one gene has to work well enough that it is selected and it goes on.
Lex Fridman
(00:27:41)
And copied enough times to where you can really test it.
Nick Lane
(00:27:44)
The likelihood is it would be lost, but there’ll be some occasions where it survives. And yes, the idea that we can have sudden, fairly abrupt changes in evolution, I think it’s time for its rebirth.
Lex Fridman
(00:27:54)
What about this idea that… kind of trying to mathematize a definition of life and saying how many steps… the shortest number of steps it takes to build the thing, almost like an engineering view of it? Do you find that at all compelling?
Nick Lane
(00:28:10)
I like that view, because I think that, in a sense, that’s not very far away from what a hypothesis needs to do to be a testable hypothesis for the origin of life. You need to spell out, here’s each step and here’s the experiment to do for each step. The idea that we can do it in the lab, some people say, “Oh, we’ll have created life within five years.” But ask them what they mean by life. We have a planet 4 billion years ago with these vent systems across the entire surface of the planet, and we have millions of years if we want it. I have a feeling that we’re not talking about millions of years. I have a feeling we’re talking about maybe millions of nanoseconds or picoseconds. We’re talking about chemistry, which is happening quickly.

(00:28:53)
But we still need to constrain those steps, but we’ve got a planet doing similar chemistry. You asked about a trajectory. The trajectory is the planetary trajectory. The planet has properties. It’s basically… It’s got a lot of iron at the center of it, it’s got a lot of electrons at the center of it. It’s more oxidized on the outside, partly because of the sun, and partly because the heat of volcanoes puts out oxidized gases. So, the planet is a battery. It’s a giant battery. And we have a flow of electrons going from inside to outside in these hydrothermal vents, and that’s the same topology that a cell has. A cell is basically just a micro-version of the planet.

(00:29:34)
And there is a trajectory in all of that, and there’s an inevitability that certain types of chemical reaction are going to be favored over others. And there’s an inevitability in what happens in water, the chemistry that happens in water. Some will be immiscible with water and will form membranes and will form insoluble structures. Water’s a… Nobody really understands water very well. And it’s another big question for experiments on the origin of life: what do you put it in? What kind of structure do we want to induce in this water? Because the last thing it’s likely to be is just kind of bulk water.
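[Editor’s note: the “planet as a battery” picture can be made quantitative with the standard bioenergetic relation between redox potential and free energy. The midpoint potentials below are approximate textbook values at pH 7, not numbers from the conversation.]

```latex
\Delta G^{\circ\prime} = -n F \,\Delta E^{\circ\prime}
% Hydrogen donates electrons:  E^{\circ\prime}(2\mathrm{H}^+/\mathrm{H_2}) \approx -414\ \mathrm{mV}
% CO2 accepts them:            E^{\circ\prime}(\mathrm{CO_2}/\mathrm{CH_4}) \approx -244\ \mathrm{mV}
% With n = 8 electrons and F \approx 96.5\ \mathrm{kJ\,mol^{-1}\,V^{-1}}:
\Delta G^{\circ\prime} \approx -8 \times 96.5 \times 0.170 \approx -131\ \mathrm{kJ/mol}
```

The roughly 170 mV gap between the electron donor inside the planet (hydrogen) and the acceptor outside (CO2) is the “voltage” of the planetary battery, and it yields the same overall free energy as the hydrogenation of CO2 to methane.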
Lex Fridman
(00:30:11)
How fundamental is water to life, would you say?
Nick Lane
(00:30:14)
I would say pretty fundamental. I wouldn’t like to say it’s impossible for life to start any other way, but water is everywhere. Water’s extremely good at what it does, and carbon works in water especially well. And carbon is everywhere. So, those things together make me think probabilistically, if we found 1,000 life forms, 995 of them would be carbon-based and living in water.
Lex Fridman
(00:30:42)
Now the reverse question. If you found a puddle of water elsewhere and some carbon… No, just a puddle of water. Is a puddle of water a pretty good indication that life either exists here or has once existed here?
Nick Lane
(00:31:00)
No.
Lex Fridman
(00:31:02)
So, it doesn’t work the other way.
Nick Lane
(00:31:05)
I think you need a living planet. You need a planet which is capable of turning over its surface. It needs to be a planet with water. It needs to be capable of bringing those electrons from inside to the outside. It needs to turn over its surface. It needs to make that water work and turn it into hydrogen. So, I think you need a living planet, but once you’ve got the living planet, I think the rest of it is kind of thermodynamics all the way.
Lex Fridman
(00:31:29)
So, if you were to run Earth over a million times up to this point, maybe beyond, to the end, let’s run it to the end, how much variety is there? You kind of spoke to this trajectory, that the environment dictates chemically, I don’t know in which other way, spiritually, dictates the direction of this giant machine, that seems chaotic, but it does seem to have order in-
Lex Fridman
(00:32:00)
… seems chaotic, but it does seem to have order in the steps it’s taking. How often will bacteria emerge? How often will something like humans emerge? How much variety do you think there would be?
Nick Lane
(00:32:15)
I think at the level of bacteria, not much variety. I think we would get, how many times did you say you want to run it, a million times? I would say at least a few hundred thousand will get bacteria again.
Lex Fridman
(00:32:28)
Oh, wow. Nice.
Nick Lane
(00:32:29)
Because I think there’s some level of inevitability that a wet, rocky planet will give rise through the same processes to something very… I think this is not something I would have thought a few years ago, but working with a PhD student of mine, Stuart Harrison, he’s been thinking about the genetic code and we’ve just been publishing on that. There are patterns that he has discerned in the code such that, if you think about them in terms of starting with CO2 and hydrogen as the first steps of biochemistry, you come up with a code which is very similar to the code that we see.

(00:33:03)
So, it wouldn’t surprise me any longer if we found life on Mars and it had a genetic code that was not very different to the genetic code that we have here, without it just being transferred across; there’s some inevitability about the whole of the beginnings of life, in my view.
Lex Fridman
(00:33:18)
That’s really promising because if the basic chemistry is tightly linked to the genetic code, that means we can interact with other life if it exists out there.
Nick Lane
(00:33:30)
Well, that’s potentially the guess, yes.
Lex Fridman
(00:33:32)
That’s really exciting if that’s the case. Okay. But then bacteria-
Nick Lane
(00:33:36)
Then we’ve got bacteria.
Lex Fridman
(00:33:37)
Yeah.
Nick Lane
(00:33:39)
How easy is photosynthesis? Much harder, I would say.

Photosynthesis

Lex Fridman
(00:33:44)
Let’s actually go there. Let’s go through the inventions.
Nick Lane
(00:33:47)
Yeah.
Lex Fridman
(00:33:49)
What is photosynthesis and why is it hard?
Nick Lane
(00:33:52)
Well, there are different forms. I mean, basically you’re taking hydrogen and you’re sticking it onto CO2 and it’s powered by the sun. The question is where are you taking the hydrogen from? And in photosynthesis that we know in plants, it’s coming from water. So you’re using the power of the sun to split water, take out the hydrogen, stick it onto CO2, and the oxygen is a waste product and you just throw it out, throw it away. So it’s the single greatest planetary pollution event in the whole history of the earth.
Lex Fridman
(00:34:21)
The pollutant being oxygen?
Nick Lane
(00:34:22)
Yes. Yeah. It also made animals possible; you can’t have large active animals without an oxygenated atmosphere, at least not in the sense that we know on earth.
Lex Fridman
(00:34:33)
So that’s a really big invention in the history of earth.
Nick Lane
(00:34:35)
Huge invention, yes. And it happened once. There are a few things that happened once on earth, and you’re always stuck with this problem: once it happened, did it become so good so quickly that it precluded the same thing happening ever again? Or are there other reasons? And we really have to look at each one in turn and think, “Why did it only happen once?” In this case, it’s really difficult to split water; it requires a lot of power, and with that power you’re effectively separating charge across a membrane. And the way in which you do it, so that it doesn’t all rush back and kind of cause an explosion right at the site, requires really careful wiring.

(00:35:11)
And that wiring, it can’t be easy to get right, because the plants that we see around us, they have chloroplasts. Those chloroplasts were cyanobacteria once. Those cyanobacteria are the only group of bacteria that can do that type of photosynthesis, so there’s plenty of opportunity but-
Lex Fridman
(00:35:29)
There’s not many bacteria. So who invented photosynthesis?
Nick Lane
(00:35:31)
The cyanobacteria or their ancestors.
Lex Fridman
(00:35:34)
And there’s not many-
Nick Lane
(00:35:36)
No other bacteria can do what’s called oxygenic photosynthesis. Lots of other bacteria can split… I mean, you can take your hydrogen from somewhere else, you can take it from hydrogen sulphide bubbling out of a hydrothermal vent, grab your two hydrogens, and the sulphur is the waste now.

(00:35:52)
You can do it from iron, you can take electrons… So the early oceans were probably full of iron. You can take an electron from ferrous iron, so iron 2+, and make it iron 3+, which now precipitates as rust, and you take a proton from the acidic early ocean, stick it there, now you’ve got a hydrogen atom, stick it onto CO2, you’ve just done the trick. The trouble is you bury yourself in rusty iron, and with sulphur you can bury yourself in sulphur. One of the reasons oxygenic photosynthesis is so much better is that the waste product is oxygen, which just bubbles away.
Lex Fridman
(00:36:26)
That seems extremely unlikely and it’s extremely essential for the evolution of complex organisms because of all the oxygen.
Nick Lane
(00:36:36)
Yeah, and that didn’t accumulate quickly either.
Lex Fridman
(00:36:39)
So it’s converting, what is it? It’s converting energy from the sun and the resource of water into the resource needed for animals?
Nick Lane
(00:36:50)
Both resources needed for animals. We need to eat and we need to burn the food, and we’re eating plants which are getting their energy from the sun and we’re burning it with their waste product, which is the oxygen. So there’s a lot of circularity in that, but without an oxygenated planet, you couldn’t really have predation. You can have animals, but you can’t really have animals that go around and eat each other. You can’t have ecosystems as we know them.

Prokaryotic vs eukaryotic cells

Lex Fridman
(00:37:19)
Well, let’s actually step back. What about eukaryotic versus prokaryotic cells, prokaryotes, what are each of those and how big of an invention is that?
Nick Lane
(00:37:31)
I personally think that’s the single biggest invention in the whole history of life.
Lex Fridman
(00:37:34)
Exciting. So what are they? Can you explain?
Nick Lane
(00:37:39)
Yeah. So I mentioned bacteria and archaea, these are both prokaryotes. They’re basically small cells that don’t have a nucleus. If you look at them under a microscope, you don’t see much going on. If you look at them under a super resolution microscope, then they’re fantastically complex. In terms of their molecular machinery, they’re amazing. In terms of their morphological appearance under a microscope, they’re really small and really simple.

(00:38:03)
The earliest life that we can physically see on the planet are stromatolites, which are made by things like cyanobacteria, and they’re large superstructures, effectively biofilms plated on top of each other, and you end up with quite large structures that you can see in the fossil record. But they never came up with animals, they never came up with plants; they came up with multicellular things, filamentous cyanobacteria for example, which are just long strings of cells. But the origin of the eukaryotic cell seems to have been what’s called an endosymbiosis, so one cell gets inside another cell, and I think that that transformed the energetic possibilities of life. So what we end up with is a kind of supercharged cell, which can have a much larger nucleus with many more genes all supported.

(00:38:54)
You could think about it as multi-bacterial power without the overhead. So you’ve got a cell and it’s got bacteria living in it, and those bacteria are providing it with the energy currency it needs. But each bacterium has a genome of its own, which costs a fair amount of energy to express, to turn over and convert into proteins and so on. What the mitochondria did, which are these power packs in our own cells, they were bacteria once and they threw away virtually all their genes, they’ve only got a few left.
Lex Fridman
(00:39:25)
So mitochondria is, like you said, is the bacteria that got inside a cell and then throw away all this stuff it doesn’t need to survive inside the cell and then kept what?
Nick Lane
(00:39:35)
So what we end up with, so it always kept a handful of genes, in our own case 37 genes, but there are a few protists, which are single-celled things, that have got as many as 70 or 80 genes, so it is not always the same, but it’s always a small number. And you can think of it as a pared-down power pack where the control unit has really been kind of pared down to almost nothing. So it’s putting out the same power, but the investment in the overheads is really pared down, and that means that you can support a much larger nuclear genome. So we’ve gone up in the number of genes, but also the amount of power you have to convert those genes into proteins. We’ve gone up about fourfold in the number of genes, but in terms of the size of genomes and your ability to make the building blocks, make the proteins, we’ve gone up a hundred thousand fold or more, so it’s a huge step change in the possibilities of evolution.

(00:40:29)
And it is interesting then that the only two occasions that complex life has arisen on earth, plants and animals, fungi you could say are complex as well, but they don’t form such complex morphology as plants and animals, both start with a single cell: they start with an oocyte and a sperm fusing together to make a zygote. So we start development with a single cell, and all the cells in the organism have identical DNA, and in the brain you switch off these genes and you switch on those genes, and in the liver you switch off those and you switch on a different set. And the standard evolutionary explanation for that is that you’re restricting conflict: you don’t have a load of genetically different cells that are all fighting each other, and so it works.

(00:41:14)
The trouble with bacteria is they form these biofilms and they’re all genetically different, and effectively they’re incapable of that level of cooperation; they would get in a fight.
Lex Fridman
(00:41:26)
Okay, so why is this such a difficult invention of getting this bacteria inside and becoming an engine, which the mitochondria is? Why do you assign it such great importance? Is it great importance in terms of the difficulty of how it was to achieve or great importance in terms of the impact it had on life?
Nick Lane
(00:41:46)
Both. It had a huge impact on life because if that had not happened, you can be certain that life on earth would be bacterial only.
Lex Fridman
(00:41:56)
And that took a really long time to-
Nick Lane
(00:41:58)
It took 2 billion years and it hasn’t happened since, to the best of our knowledge, so it looks as if it’s genuinely difficult. And if you think about it then from just an informational perspective, you think bacteria, they structure their information differently. So a bacterial cell has a small genome, you might have 4,000 genes in it. But a single E. coli cell has access to about 30,000 genes, potentially. It’s got a metagenome, where other E. coli out there have got different gene sets and they can switch them around between themselves. And so you can generate a huge amount of variation, and they’ve got more. An E. coli metagenome is larger than the human genome, we have 20,000 genes or something, and they’ve had 4 billion years of evolution to work out what can I do and what can’t I do with this metagenome. And the answer is, you’re stuck, you’re still bacteria.

(00:42:54)
So they have explored genetic sequence space far more thoroughly than eukaryotes ever did because they’ve had twice as long at least, and they’ve got much larger populations, and they never got around this problem. So why can’t they? It seems as if you can’t solve it with information alone. So what’s the problem? The problem is structure.

(00:43:16)
If the very first cells needed an electrical charge on their membrane to grow, and in bacteria it’s the outer membrane that surrounds the cell which is electrically charged, you try and scale that up and you’ve got a fundamental design problem, you’ve got an engineering problem, and there are examples of it. And what we see in all these cases is what’s known as extreme polyploidy, which is to say they have tens of thousands of copies of their complete genome, which is energetically hugely expensive, and you end up with a large bacterium with no further development. What you need to do is incorporate these electrically charged power pack units inside with their control units intact, and for them not to conflict so much with the host cell that it all goes wrong, perhaps it goes wrong more often than not, and then you change the topology of the cell.

(00:44:10)
Now, you don’t necessarily have any more DNA than a giant bacterium with extreme polyploidy, but what you’ve got is an asymmetry. You now have a giant nuclear genome surrounded by lots of subsidiary energetic genomes; they’re the control units that are doing all the control of energy generation.
Lex Fridman
(00:44:32)
Could this have been done gradually or does it have to be done, the power pack has to be all intact and ready to go and working?
Nick Lane
(00:44:40)
I mean, it’s a kind of step change in the possibilities of evolution, but it doesn’t happen overnight. It’s going to still require multiple, multiple generations. So it could take millions of years, it could take a shorter time. There’s another thing: I would like to put numbers on the steps and try and work out what’s required at each step, and we are trying to do that with sex, for example. You can’t have a very large genome unless you have sex, so at that point, what are the changes to go from bacterial recombination to eukaryotic recombination? What do you need to do? Why do we go from passing around bits of DNA as if it’s loose change to fusing cells together, lining up the chromosomes, recombining across the chromosomes, and then going through two rounds of cell division to produce your gametes? All eukaryotes do it that way.

(00:45:24)
So again, why switch? What are the drivers here? So there’s a lot of time, there’s a lot of evolution, but as soon as you’ve got cells living inside another cell, what you’ve got is a new design, you’ve got new potential that you didn’t have before.
Lex Fridman
(00:45:39)
So the cell living inside another cell, that design allows for better storage of information, better use of energy, more delegation, like a hierarchical control of the whole thing. And then somehow that leads to ability to have multi-cell organisms?
Nick Lane
(00:46:00)
I’m not sure that you have hierarchical control necessarily, but you’ve got a system where you can have a much larger information storage depot in the nucleus, you can have a much larger genome. And that allows multi-cellularity, yes, because it allows you… It’s a funny thing, to have an animal where I have 70% of my genes switched on in my brain and a different 50% switched on in my liver or something, you’ve got to have all those genes in the egg cell at the very beginning, and you’ve got to have a program of development which says, “Okay, you guys switch off those genes and switch on those genes, and you guys, you do that.” But all the genes are there at the beginning. That means you’ve got to have a lot of genes in one cell and you’ve got to be able to maintain them, and the problem with bacteria is they don’t get close to having enough genes in one cell. So if you were to try to make a multicellular organism from bacteria, you’d bring different types of bacteria together and hope they’ll cooperate, and the reality is they don’t.
Lex Fridman
(00:46:58)
That’s really, really tough to do, combinatorially.
Nick Lane
(00:47:00)
We know they don’t because it doesn’t exist.
Lex Fridman
(00:47:02)
We have the data as far as we know. I’m sure there’s a few special ones and they die off quickly. I’d love to know some of the most fun things bacteria have done since?
Nick Lane
(00:47:12)
Oh, I mean, they can do some pretty funky things. This is a broad brushstroke that I’m talking about, but yeah.

Sex

Lex Fridman
(00:47:19)
Generally speaking. So another fun invention, us humans seem to utilize it well, but you say it was also very important early on, is sex. So what is sex? Just asking for a friend. And when was it invented and how hard was it to invent, just as you were saying, and why was it invented? How hard was it? And when?
Nick Lane
(00:47:45)
I have a PhD student who’s been working on this-
Lex Fridman
(00:47:45)
On sex?
Nick Lane
(00:47:47)
… and we’ve just published a couple of papers. On sex, yes, yes, yes.
Lex Fridman
(00:47:50)
Nice. Where do you publish these? Is it biology, genetics, journals?
Nick Lane
(00:47:55)
This is actually PNAS, which is Proceedings of the National Academy of Sciences.
Lex Fridman
(00:48:00)
So like, broad, big, big picture stuff?
Nick Lane
(00:48:02)
Everyone’s interested in sex.
Lex Fridman
(00:48:03)
Yeah.
Nick Lane
(00:48:04)
The job of a biologist is to make sex dull.
Lex Fridman
(00:48:08)
Yeah, that’s a beautiful way to put it. Okay, so when was it invented?
Nick Lane
(00:48:13)
It was invented with eukaryotes about 2 billion years ago. All eukaryotes share the same basic mechanism that you produce gametes, the gametes fuse together. So a gamete is the egg cell and the sperm, they’re not necessarily even different in size or shape. So the simplest eukaryotes produce what are called motile gametes, they’re all like sperm and they all swim around, they find each other, they fuse together, they don’t have much going on there beyond that. And then these are haploids, which is to say we all have two copies of our genome, and the gametes have only a single copy of the genome. So when they fuse together, you now become diploid again, which is to say you now have two copies of your genome, and what you do is you line them all up and then you double everything.

(00:49:01)
So now we have four copies of the complete genome, and then we crisscross between all of these things. So we take a bit from here and stick it on there and a bit from here, and we stick it on here, that’s recombination. Then we go through two rounds of cell division. So we divide in half, so now the two daughter cells have two copies, and we divide in half again, now we have some gametes, each of which has got a single copy of the genome. And that’s the basic ground plan for what’s called meiosis and syngamy, and that’s basically sex.

(00:49:31)
And it happens at the level of single-celled organisms and it happens pretty much the same way in plants and pretty much the same way in animals and so on. And it’s not found in any bacteria, they switch things around using the same machinery and they take up a bit of DNA from the environment. They take out this bit and stick in that bit, and it’s the same molecular machinery they’re using to do it.
Lex Fridman
(00:49:50)
So what about this, you said “find each other”, this kind of imperative to find each other. What is that?
Nick Lane
(00:49:58)
Well, you’ve got to fuse cells together. So the bottom line on all of this is bacteria, I mean, it’s kind of simple when you’ve figured it out, and figuring it out, this is not me, this is my PhD student, Marco Colnaghi. And in effect, if you’re doing lateral gene transfer, you’re an E. coli cell, you’ve got 4,000 genes, you want to scale up to a eukaryotic size. I want to have 20,000 genes and I need to maintain my genome so it doesn’t get shot to pieces by mutations, and I’m going to do it by lateral gene transfer.

(00:50:32)
So I know I’ve got a mutation in a gene, I don’t know which gene it is because I’m not sentient, but I know I can’t grow, I know all my regulation systems are saying, “Something wrong here, something wrong, pick up some DNA, pick up a bit of DNA from the environment.” If you’ve got a small genome, the chances of you picking up the right bit of DNA from the environment is much higher than if you’ve got a genome of 20,000 genes. To do that, you’ve effectively got to be picking up DNA all the time, all day long and nothing else, and you’re still going to get the wrong DNA. You’ve got to pick up large chunks, and in the end, you’ve got to line them up, you’re forced into sex, to coin a phrase.
Lex Fridman
(00:51:10)
So there is a kind of incentive-
Nick Lane
(00:51:18)
If you want to have a large genome, you’ve got to prevent it mutating to nothing and that will happen with bacteria, so there’s another reason why bacteria can’t have a large genome. But as soon as you give eukaryotic cells the power pack that allows them to increase the size of their genome, then you face the pressure that you’ve got to maintain its quality. You’ve got to stop it just mutating away.
Lex Fridman
(00:51:38)
What about sexual selection? So the finding like, “I don’t like this one. I don’t like this one. This one seems all right.” At which point does it become less random?
Nick Lane
(00:51:52)
It’s hard to know.
Lex Fridman
(00:51:54)
Because eukaryotes just kind of float around just kind of have… It’s kind of like Tinder these days.
Nick Lane
(00:51:59)
Yeah, I mean, is there sexual selection in single-celled eukaryotes? There probably is, it’s just that I don’t know very much about it. By the time we-
Lex Fridman
(00:51:59)
You don’t hang out with eukaryotes?
Nick Lane
(00:52:06)
Well, I do all the time, but you know?
Lex Fridman
(00:52:07)
You can’t communicate with them, yeah.
Nick Lane
(00:52:08)
Yeah. A peacock or something.
Lex Fridman
(00:52:11)
Yes.
Nick Lane
(00:52:13)
The kind of standard, this is not quite what I work on, but the standard answer is that it’s female mate choice. She’s looking for good genes, and if you can have a tail that’s like this and still survive, still be alive, not actually have been taken down by the nearest predator, then you must’ve got pretty good genes; despite this handicap you are able to survive.
Lex Fridman
(00:52:36)
So those are human interpretable things, like with a peacock. But I wonder, I’m sure echoes of the same thing are there with more primitive organisms, basically your PR, like how you advertise yourself, that you’re worthy?
Nick Lane
(00:52:54)
Absolutely.
Lex Fridman
(00:52:54)
So one big advertisement is the fact that you survived it all.
Nick Lane
(00:52:57)
Yeah, let me give you one beautiful example of an algal bloom, and this can be cyanobacteria, this can be in bacteria. So if suddenly you pump nitrate or phosphate or something into the ocean and everything goes green, you end up with all this algae growing there, and a viral infection or something like that can kill the entire bloom overnight. And it’s not that the virus takes out everything overnight, it’s that most of the cells in that bloom kill themselves before the virus can get onto them. And it’s through a form of cell death called programmed cell death. And we do the same thing, this is how we have the gaps between our fingers and so on, it’s how we craft synapses in the brain. It is fundamental, again, to multicellular life.

(00:53:47)
They have the same machinery in these algal blooms. How do they know who dies? The answer is they will often put out a toxin and that toxin is a kind of challenge to you. Either you can cope with the toxin or you can’t. If you can cope with it, you form a spore and you will go on to become the next generation. You form kind of a resistant spore, you sink down a little bit, you get out of the way, you can’t be attacked by a virus if you’re a spore or at least not so easily. Whereas if you can’t deal with that toxin, you pull the plug and you trigger your death apparatus and you kill yourself.
Lex Fridman
(00:54:27)
It’s truly life and death selection.
Nick Lane
(00:54:29)
Yeah, so it’s a challenge, and this is a bit like sexual selection. They’re all pretty much genetically identical, but they’ve had different life histories. So have you had a tough day? Did you happen to get infected by this virus? Or did you run out of iron? Or did you get a bit too much sun? Whatever it may be. If this extra stress of the toxin just pushes you over the edge, then you have this binary choice, either you’re the next generation or you kill yourself now using this same machinery.

DNA

Lex Fridman
(00:54:57)
It’s also actually exactly the way I approach dating, but that’s probably why I am single. Okay. What about, if we can step back, DNA, just the mechanism of storing information, RNA, DNA, how big of an invention was that? That seems to be fundamental to something deep within what life is: the ability, as you said, to kind of store and propagate information. But then you also kind of implied, with you and your student’s work, that there’s a deep connection between the chemistry and the ability to have this kind of genetic information. So how big of an invention is it to have a nice representation, a nice hard drive for info to pass on?
Nick Lane
(00:55:46)
Huge, I suspect. I mean, but when I was talking about the code, you see the code in RNA as well, and RNA almost certainly came first. And there’s been an idea going back decades called the RNA world because RNA in theory can copy itself and can catalyze reactions. So it kind of cuts out this chicken and egg loop.
Lex Fridman
(00:56:07)
So the DNA, it’s possible, is not that special?
Nick Lane
(00:56:09)
So RNA is the thing that does the work really, and the code lies in RNA. The code lies in the interactions between RNA and amino acids and it still is there today in the ribosome, for example, which is just kind of a giant ribozyme, which is to say it’s an enzyme that’s made of RNA.

(00:56:28)
So getting to RNA, I suspect is probably not that hard. But getting from RNA, there’s multiple different types of RNA now, how do you distinguish? This is something we’re actively thinking about, how do you distinguish between a random population of RNA? Some of them go on to become messenger RNA, this is the transcript of the code of the gene that you want to make. Some of them become transfer RNA, which is kind of the unit that holds the amino acid that’s going to be polymerized. Some of them become ribosomal RNA, which is the machine, which is joining them all up together.

(00:57:07)
How do they discriminate themselves? There’s some kind of phase transition going on there, I don’t know, it’s a difficult question, and we’re now in the region of biology where information is coming in. But the thing about RNA is it’s very, very good at what it does, but the largest genomes supported by RNA are RNA viruses, like HIV, for example. They’re pretty small. And so there’s a limit to how complex life could be unless you come up with DNA, which chemically is a really small change, but how easy is it to make that change? I don’t really know. As soon as you’ve got DNA, then you’ve got an amazingly stable molecule for information storage, and you can do absolutely anything. But how likely that transition from RNA to DNA was? I don’t know either.
Lex Fridman
(00:57:54)
How much possibility is there for variety in ways to store information? There seem to be very specific characteristics about the programming language of DNA.
Nick Lane
(00:58:06)
Yeah, there’s a lot of work going on on what’s called xeno DNA or RNA. Can we replace the bases themselves, the letters if you like, in RNA or DNA? Can we replace the backbone? Can we replace, for example, phosphate with arsenate? Can we replace the sugar ribose or deoxyribose with a different sugar? And the answer is yes, you can, within limits; there’s not an infinite space there. Arsenate doesn’t really work, the bonds are not as strong as phosphate, so it’s probably quite hard to replace phosphate. It’s possible to do it.

(00:58:43)
The question to me is, why is it this way? Is it because there was some form of selection that this is better than the other forms and there were lots of competing forms of information storage early on, and this one was the one that worked out? Or was it kind of channeled that way, that these are the molecules that you’re dealing with and they work? And I’m increasingly thinking it’s that way that we’re channeled towards ribose phosphate and the bases that are used, but there are 200 different letters kicking around out there that could have been used.
Lex Fridman
(00:59:17)
It’s such an interesting question. If you look at, in the programming world in computer science, there’s a programming language called JavaScript, which was written super quickly, it’s a giant mess, but it took over the world.
Nick Lane
(00:59:30)
Sounds very biological.
Lex Fridman
(00:59:31)
It was kind of a running joke that surely this can’t be… This is a terrible programming language, it’s a giant mess. It’s full of bugs, it’s so easy to write really crappy code but it took over all of front end development in the web browser. If you have any kind of dynamic interactive website, it’s usually running JavaScript and it’s now taking over much of the backend, which is the serious heavy duty computational stuff. And it’s become super fast with the different compilation engines that are running it, so it really took over the world. It’s very possible that this initially crappy derided language actually takes everything over.

(01:00:14)
And then the question is, did human civilization always strive towards JavaScript or was JavaScript just the first programming language that ran on the browser and still sticky? The first is the sticky one, and so it wins over anything else because it was first. And I don’t think that’s answerable, right? But it’s good to ask that. I suppose in the lab you can’t run it with programming languages, but in biology you can probably do some kind of small scale evolutionary test to try to infer which is which?
Nick Lane
(01:00:54)
Yeah, I mean, in a way, we’ve got the hardware and the software here, and the hardware is maybe the DNA and the RNA itself, and then the software perhaps is more about the code. Did the code have to be this way? Could it have been a different way? And people talk about the optimization of the code, and there’s some suggestion for that. I think it’s weak, actually. But you could imagine you can come out with a million different codes and this would be one of the best ones.
Lex Fridman
(01:01:22)
Well, we don’t know this. We don’t know this.
Nick Lane
(01:01:25)
People have tried to model it based on the effect that mutations would have. So no, you’re right, we don’t know, because that’s a single assumption, that a mutation is what’s being selected on there, and there are other possibilities too.
Lex Fridman
(01:01:39)
I mean, there does seem to be a resilience and a redundancy to the whole thing.
Nick Lane
(01:01:43)
Yep.
Lex Fridman
(01:01:43)
It’s hard to mess up, and the way you mess it up is often likely to produce interesting results.
Nick Lane
(01:01:52)
Are you talking about JavaScript or the genetic code now?
Lex Fridman
(01:01:54)
Both.
Nick Lane
(01:01:55)
Yeah? Well, I mean, it’s almost, biology is underpinned by this kind of mess as well. And you look at the human genome and it is full of stuff that is really either broken or dysfunctional or was a virus once or whatever it may be, and somehow it works and maybe we need a lot of this mess. We know that some functional genes are taken from this mess.

Violence

Lex Fridman
(01:02:15)
So what about, you mentioned predatory behavior.
Nick Lane
(01:02:19)
Yeah.
Lex Fridman
(01:02:20)
We talked about sex. What about violence, predator and prey dynamics? When was that invented? And in poetic and biological ways of putting it, how do you describe the predator-prey relationship? Is it a beautiful dance or is it a violent atrocity?
Nick Lane
(01:02:43)
Well, I guess it’s both, isn’t it? I mean, when does it start? It starts in bacteria. You see these amazing predators; Bdellovibrio is one that Lynn Margulis used to talk about a lot. It’s got a kind of a drill piece that drills through the wall and the membrane of the bacterium, and then it effectively eats the bacterium from just inside the periplasmic space and makes copies of itself that way, so that’s straight predation. There are predators among bacteria.
Lex Fridman
(01:03:08)
So predation in that, sorry to interrupt, means you murder somebody and use their body as a resource in some way?
Nick Lane
(01:03:17)
Yeah.
Lex Fridman
(01:03:18)
But it’s not parasitic in that you need them to be still alive?
Nick Lane
(01:03:23)
No, no. I mean, predation is you kill them really.
Lex Fridman
(01:03:26)
Murder.
Nick Lane
(01:03:27)
Parasitism, you kind of live on them.
Lex Fridman
(01:03:30)
Okay. But it seems the predator is the really popular tool?
Nick Lane
(01:03:35)
So what we see, if we go back 560, 570 million years before the Cambrian explosion, there is what’s known as the Ediacaran fauna, or sometimes they’re called Vendobionts, which is a lovely name, and it’s not obvious that they’re animals at all. They’re stalked things, they often have fronds that look a lot like leaves with kind of fractal branching patterns on them and-
Nick Lane
(01:04:00)
… branching patterns on them. And the thing is they’re found, sometimes, geologists can figure out the environment that they were in and say, “This is more than 200 meters deep because there’s no sign of any waves. There’s no storm damage down here,” this kind of thing. They were more than 200 meters deep, so they’re definitely not photosynthetic. These are animals, and they’re filter feeders. We know sponges and corals and things are filter-feeding animals; they’re stuck to the spot. And little bits of carbon that come their way, they filter it out, and that’s what they’re eating. So no predation involved in this, beyond stuff just dies anyway, and it feels like a very gentle, rather beautiful, rather limited world, you might say. There’s not a lot going on there.

(01:04:49)
And something changes. Oxygen definitely changes during this period. Other things may have changed as well. But the next thing you really see in the fossil record is the Cambrian explosion. And what do we see there? We’re now seeing animals that we would recognize: they’ve got eyes, they’ve got claws, they’ve got shells. They’re plainly killing things or running away and hiding. So we’ve gone from a rather gentle but limited world to a rather vicious, unpleasant world that we recognize, which leads to kind of arms races, evolutionary arms races, which again is something that when we think about a nuclear arms race, we think, “Jesus, we don’t want to go there. It’s not done anybody any good.” In some ways, maybe it does do good. I don’t want to make an argument for nuclear arms, but predation as a mechanism forces organisms to adapt, to change, to be better, to escape, or to kill. If you need to eat, then you’ve got to eat. A cheetah is not going to run at that speed unless it has to, because the zebra is capable of escaping. So it leads to much greater feats of evolution than would ever have been possible without it, and in the end, to a much more beautiful world. So it’s not all bad, by any means.

(01:06:17)
But the thing is, you can’t have this if you don’t have an oxygenated planet because it’s all, in the end, it’s about how much energy can you extract from the food you eat? And if you don’t have an oxygenated planet, you can get about 10% out, not much more than that. And if you’ve got an oxygenated planet, you can get about 40% out. And that means you can have, instead of having one or two trophic levels, you can have five or six trophic levels, and that means things can eat things that eat other things and so on, and you’ve gone to a level of ecological complexity, which is completely impossible in the absence of oxygen.
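Nick’s energy argument can be put in toy numbers. The sketch below is not from the conversation; the 2% survival cutoff is an arbitrary assumption chosen only to illustrate how a ~10% versus ~40% energy transfer changes the number of viable trophic levels.

```python
def trophic_levels_supported(efficiency, primary_energy=1.0, minimum_viable=0.02):
    """Count trophic levels whose energy budget stays above an (arbitrary)
    minimum, when each level passes on only `efficiency` of the energy below it."""
    levels = 0
    energy = primary_energy
    while energy >= minimum_viable:
        levels += 1
        energy *= efficiency
    return levels

anaerobic = trophic_levels_supported(0.10)  # ~10% extraction without oxygen
aerobic = trophic_levels_supported(0.40)    # ~40% extraction with oxygen
print(anaerobic, aerobic)
```

With that cutoff, an anoxic food chain stalls at about two levels while an oxygenated one supports around five, the same "one or two" versus "five or six" contrast Nick describes.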
Lex Fridman
(01:06:51)
This reminds me of the Hunter S. Thompson quote that, “For every moment of triumph, for every instance of beauty, many souls must be trampled.” The history of life on Earth, unfortunately, is one of violence, just the trillions and trillions of multicellular organisms that were murdered in the struggle for survival.
Nick Lane
(01:07:17)
It’s a sorry statement, but yes, it’s basically true.
Lex Fridman
(01:07:20)
And that somehow is a catalyst from an evolutionary perspective for creativity, for creating more and more complex organisms that are better and better at surviving-
Nick Lane
(01:07:30)
Survival of the fittest, if you just go back to that old phrase, means death of the weakest. Now, what’s fit? What’s weak? These are terms that don’t have much intrinsic meaning, but the thing is, evolution only happens because of death.
Lex Fridman
(01:07:45)
One way to die is through the constraints, the scarcity of resources in the environment, but that seems to be not nearly as good a mechanism for death as other creatures roaming about in the environment. When I say environment, I mean the static environment, but then there’s the dynamic environment of bigger things trying to eat you and use you for your energy.
Nick Lane
(01:08:10)
It forces you to come up with a solution to your specific problem that is inventive and is new and hasn’t been done before. So it forces literally change, literally evolution on populations. They have to become different.
Lex Fridman
(01:08:27)
And it’s interesting that humans have channeled that into more… I guess what humans are doing is they’re inventing more productive and safe ways of doing that. This whole idea of morality and all those kinds of things, I think they ultimately lead to competition versus violence. Because I think violence can have a cold, brutal, inefficient aspect to it, but if you channel that into more controlled competition in the space of ideas, in the space of approaches to life, maybe you can be even more productive than evolution is. Because evolution is very wasteful. The amount of murder required to really test a good idea, genetically speaking, is just a lot. Many, many, many generations.
Nick Lane
(01:09:21)
Morally, we cannot base society on the way that evolution works.
Lex Fridman
(01:09:26)
But that’s an invention, right, to morality?
Nick Lane
(01:09:27)
But actually, in some respects, we do, which is to say, “This is how science works. We have competing hypotheses that have to get better, otherwise they die.” It’s the way that society works. In Ancient Greece, we had Athens and Sparta and city-states, and then we had the Renaissance and nation-states, and universities compete with each other a tremendous amount, companies compete with each other all the time. It drives innovation. And if we want to do it without all the death that we see in nature, then we have to have some kind of societal-level control that says, “Well, there are some limits, guys, and these are what the limits are going to be,” and society as a whole has to say, “Right, we want to limit the amount of death here, so you can’t do this and you can’t do that.” Who makes up these rules, and how do we know? It’s a tough thing, but it’s basically trying to find a moral basis for avoiding the death of evolution and natural selection and keeping the innovation and the richness of it.
Lex Fridman
(01:10:27)
I forgot who said it, but that murder is illegal… Probably Kurt Vonnegut. Murder is illegal except when it’s done to the sound of trumpets and at a large scale. So we still have wars, but we are struggling with this idea that murder is a bad thing. It’s so interesting how we’re channeling the best of the evolutionary imperative and trying to get rid of the stuff that’s not productive, trying to almost accelerate evolution. The same kind of thing that makes evolution creative, we’re trying to use that.
Nick Lane
(01:11:07)
I think we naturally do it. I don’t think we can help ourselves to it.
Lex Fridman
(01:11:11)
It’s so hard to know.
Nick Lane
(01:11:12)
Capitalism as a form is basically about competition and differential rewards. But society, and I keep using this word, has a moral obligation; we cannot operate as a society if we go purely that way. It’s interesting that we’ve had problems achieving balance. For example, in the financial crash in 2009, do you let banks go to the wall or not, this kind of question. In evolution, certainly, you let them go to the wall. And in that sense, you don’t need the regulation because they just die. Whereas if we as a society think about what’s required for society as a whole, then you don’t necessarily let them go to the wall, in which case you then have to impose some kind of regulation that the bankers themselves will, in an evolutionary manner, exploit.
Lex Fridman
(01:12:08)
Yeah, we’ve been struggling with this kind of idea of capitalism, the cold brutality of capitalism that seems to create so much beautiful things in this world, and then the ideals of communism that seem to create so much brutal destruction in history. We struggle with ideas of, “Well, maybe we didn’t do it right. How can we do things better,” and then the ideas are the things we’re playing with, as opposed to people. If a PhD student has a bad idea, we don’t shoot the PhD student. We just criticize their idea and hope they improve.
Nick Lane
(01:12:42)
You have a very humane [inaudible 01:12:43].

Human evolution

Lex Fridman
(01:12:44)
Yeah. Yeah. I don’t know how you guys do it. The way I run things, it’s always life and death. Okay. So it is interesting about humans that there is an inner sense of morality, which begs the question of, how did Homo sapiens evolve? If we think about the early invention of sex and the early invention of predation, what was the thing invented to make humans? What would you say?
Nick Lane
(01:13:17)
I suppose a couple of things I’d say. Number one is you don’t have to wind the clock back very far, five, six million years or so, and let it run forwards again, and the chances of humans as we know them are not necessarily that high. Imagine as an alien, you find planet Earth, and it’s got everything apart from humans on it. It’s an amazing, wonderful, marvelous planet, but nothing that we would recognize as extremely intelligent life, a space-faring civilization. So when we think about aliens, we’re kind of after something like ourselves, or after a space-faring civilization. We’re not after zebras and giraffes and lions and things, amazing though they are. But the additional kind of evolutionary steps to go from large, complex mammals, monkeys, let’s say, to humans don’t strike me as that long a distance. It’s all about the brain. And where are the brain and morality coming from? It seems to me to be all about groups, human groups and interactions between groups.
Lex Fridman
(01:14:22)
The collective intelligence of it.
Nick Lane
(01:14:24)
Yes.
Lex Fridman
(01:14:24)
Yeah.
Nick Lane
(01:14:25)
The interactions, really. And there’s a guy at UCL called Mark Thomas, who’s done a lot of really beautiful work, I think, on this kind of question. I talk to him every now and then, so my views are influenced by him. But a lot seems to depend on population density. The more interactions you have going on between different groups, the more transfer of information, if you like, between groups, of people moving from one group to another group, almost like lateral gene transfer in bacteria. The more expertise you’re able to develop and maintain, the more culturally complex your society can become. And groups that have become detached, like on Easter Island, for example, very often degenerate in terms of the complexity of their civilization.
Lex Fridman
(01:15:13)
Is that true for complex organisms in general, population density-
Nick Lane
(01:15:19)
Really matters.
Lex Fridman
(01:15:19)
… is often productive?
Nick Lane
(01:15:19)
Really matters. But in human terms, I don’t know what the actual factors were that were driving a large brain, but you can talk about fire, you can talk about tool use, you can talk about language, and none of them seem to correlate especially well with the actual known trajectory of human evolution in terms of cave art and these kinds of things. That seems to work much better just with population density and the number of interactions between different groups, all of which is really about human interactions, human-human interactions, and the complexity of those.
Lex Fridman
(01:15:58)
But population density is the thing that increases the number of interactions, and then there must have been inventions forced by that number of interactions that actually led to humans. So Richard Wrangham talks about how basically the beta males had to beat up the alpha male. That’s what collaboration looks like: when you’re living together, our early ancestors didn’t like the dictatorial aspect of a single individual at the top of a tribe, so they learned to collaborate, to basically create a democracy of sorts, a democracy that prevents, minimizes, or lessens the amount of violence, which essentially gives strength to the tribe and makes the war between tribes versus the dictator [inaudible 01:16:55]-
Nick Lane
(01:16:55)
I think one of the most wonderful things about humans is we’re all of those things. We are deeply social as a species, and we’re also deeply selfish. And it seems to me the conflict between capitalism and communism is really just two aspects of human nature, both of which are-
Lex Fridman
(01:17:11)
We’ve got both.
Nick Lane
(01:17:11)
We have both. And we have a constant kind of vying between the two sides. We really do care about other people, beyond our families, beyond our immediate people. We care about society and the society that we live in. And you could say that’s a drawing towards socialism or communism. On the other side, we really do care about ourselves. We really do care about our families, about working for something that we gain from, and that’s the capitalist side of it. They’re both really deeply ingrained in human nature.

(01:17:38)
In terms of violence and interactions between groups, yes, all this dynamic of if you’re interacting between groups, you can be certain that they’re going to be burning each other and all kinds of physical, violent interactions as well, which will drive the kind of cleverness of, how do you resist this? Let’s build a tower. What are we going to do to prevent being overrun by those marauding gangs from over there? And you look outside humans, and you look at chimps and bonobos and so on, and they’re very, very different structures to society. Chimps tend to have an aggressive alpha male-type structure, and bonobos, there’s basically a female society, where the males are predominantly excluded and only brought in at the behest of the female. We have a lot in common with both of those groups.
Lex Fridman
(01:18:29)
And there’s, again, tension there. Probably chimps, more violence; the bonobos, probably more sex. That’s another tension. How serious do we want to be? How much fun do we want to be?

Neanderthals


(01:18:44)
Asking for a friend again, what do you think happened to the Neanderthals? What did we cheeky humans, Homo sapiens, do to the Neanderthals? Do you think we murdered them? How did we murder them? How did we out-compete them, or did we out-mate them?
Nick Lane
(01:19:01)
I don’t know. I think there’s unequivocal evidence that we mated with them.
Lex Fridman
(01:19:06)
Yeah. We always try to mate with everything.
Nick Lane
(01:19:07)
Yes, pretty much. There’s some interesting… The first sequences that came along were in mitochondrial DNA, and that was back in about 2002 or thereabouts. And what was found was that Neanderthal mitochondrial DNA was very different to human mitochondrial DNA-
Lex Fridman
(01:19:23)
Oh, that’s so interesting.
Nick Lane
(01:19:24)
And you could do a clock on it, and it said the divergence date was about 600,000 years ago or something like that, so not so long ago. And then the first full genomes were sequenced maybe 10 years after that, and they showed plenty of signs of mating between the two. So the mitochondrial DNA effectively says no mating, and the nuclear genes say, yeah, lots of mating, but we don’t know-
Lex Fridman
(01:19:48)
How is that possible? Sorry, can you explain the difference between mitochondrial DNA-
Nick Lane
(01:19:51)
Sorry, yes.
Lex Fridman
(01:19:53)
… and nucleus?
Nick Lane
(01:19:53)
I’ve talked before about the mitochondria, which are the power packs in cells. They have their own pared-down control unit, their DNA, which is passed on by the mother only. And in the egg cell, we might have half a million copies of mitochondrial DNA. There are only 37 genes left. And it’s basically the control unit of energy production. That’s what it’s doing.
Lex Fridman
(01:20:18)
It’s a basic, old-school machine that does energy production.
Nick Lane
(01:20:21)
It’s got genes that were considered to be effectively trivial because they did a very narrowly defined job, but they’re not trivial in the sense that that narrowly defined job is about everything to do with being alive. So they’re much easier to sequence. You’ve got many more copies of these things, and you can sequence them very quickly.

(01:20:42)
But the problem is, because they go down only the maternal line, from mother to daughter, your mitochondrial DNA and mine, it’s going nowhere. It doesn’t matter. Any kids we have, they get their mother’s mitochondrial DNA, except in very, very rare and strange circumstances. So it tells a different story, and it’s not a story which is easy to reconcile always. And what it seems to suggest, to my mind at least, is that there was one-way traffic of genes probably going from humans into Neanderthals rather than the other way around.
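The asymmetry Nick describes, nuclear genes showing lots of mating while mitochondrial DNA shows none, falls straight out of strictly maternal inheritance. The toy simulation below is my own illustration, not anything from the genetics literature: a few hypothetical male migrants per generation carry nuclear ancestry into a population, but because mtDNA comes only from mothers, the recipient population’s mitochondrial lineage never changes.

```python
import random

def simulate(generations=50, pop_size=200, male_migrants=5, seed=0):
    """Each individual is (mtDNA type, fraction of 'A' nuclear ancestry).
    Population B receives a few male migrants from population A every
    generation. Children take mtDNA from the mother only, but average
    their parents' nuclear ancestry."""
    rng = random.Random(seed)
    females = [("B", 0.0)] * (pop_size // 2)
    males = [("B", 0.0)] * (pop_size // 2)
    for _ in range(generations):
        # migrant males: A-type mtDNA, 100% A nuclear ancestry
        mating_males = males + [("A", 1.0)] * male_migrants
        def child():
            mom, dad = rng.choice(females), rng.choice(mating_males)
            return (mom[0], (mom[1] + dad[1]) / 2)  # mtDNA strictly maternal
        females, males = ([child() for _ in range(pop_size // 2)],
                          [child() for _ in range(pop_size // 2)])
    everyone = females + males
    a_mtdna = sum(ind[0] == "A" for ind in everyone) / len(everyone)
    a_nuclear = sum(ind[1] for ind in everyone) / len(everyone)
    return a_mtdna, a_nuclear

mt_fraction, nuclear_fraction = simulate()
print(mt_fraction, nuclear_fraction)
```

After 50 generations the B population carries substantial A nuclear ancestry, yet its mitochondrial DNA is still entirely B-type, the same kind of disconnect seen between the Neanderthal nuclear and mitochondrial signals.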

(01:21:18)
Why did the Neanderthals disappear? I don’t know. I suspect they were probably less violent, less clever, less populous, less willing to fight. I don’t know. I think we probably drove them to extinction at the margins of Europe.
Lex Fridman
(01:21:37)
And it’s interesting how much, if we ran Earth over and over again, how many of these branches of intelligent beings that have figured out how to leverage collective intelligence, which ones of them emerge, which ones of them succeed? Is it the more violent ones? Is it the more isolated one? What dynamics result to more productivity? And I suppose we’ll never know. The more complex the organism, the harder it is to run the experiment in the lab.
Nick Lane
(01:22:10)
Yes. And in some respects, maybe it’s best if we don’t know.

Sensory inputs

Lex Fridman
(01:22:15)
Yeah. The truth might be very painful. What about, if we actually step back, a couple of interesting things that we humans do? One is object manipulation and movement, and of course, movement was something that was done… That was another big invention, being able to move around the environment. And the other one is this sensory mechanism, how we sense the environment. One of the coolest high-definition ones is vision. How big are those inventions in the history of life on Earth?
Nick Lane
(01:22:50)
Vision, movement, again, extremely important, going back to the origin of animals, the Cambrian explosion, where suddenly you’re seeing eyes in the fossil record. And it’s not necessarily… Again, lots of people historically have said, “What use is half an eye?” And you can go in a series of steps from a light-sensitive spot on a flat piece of tissue to an eyeball with a lens and so on. If you assume no more than… I don’t remember, this was a specific model that I have in mind, but it was 1% change or half a percent change for each generation, how long would it take to evolve an eye as we know it? And the answer is half a million years. It doesn’t have to take long. That’s not how evolution works, and that’s not an answer to the question; it just shows you can reconstruct the steps and you can work out roughly how it can work.
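The back-of-the-envelope logic here is just compound interest. The model Nick is half-remembering is plausibly Nilsson and Pelger’s 1994 eye-evolution estimate, but the numbers below are my illustrative assumptions, not the published figures: an ~80-million-fold total change in the relevant morphology, 1% change per selected step, and a couple of hundred generations for each step to spread through the population.

```python
import math

def generations_to_evolve(total_fold_change, pct_per_step, generations_per_step):
    """Number of generations for a trait to change `total_fold_change`-fold,
    if each selected step changes it by `pct_per_step` percent and takes
    `generations_per_step` generations to fix in the population."""
    steps = math.log(total_fold_change) / math.log(1 + pct_per_step / 100)
    return steps * generations_per_step

# Illustrative, hypothetical inputs in the spirit of the estimate:
gens = generations_to_evolve(80e6, 1.0, 200)
print(round(gens))  # a few hundred thousand generations
```

At roughly one generation per year for a small aquatic animal, that lands under half a million years, which is the scale of the answer Nick quotes.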

(01:23:44)
So it’s not that big a deal to evolve an eye. But once you have one, then there’s nowhere to hide. Again, we’re back to predator-prey relationships. We’re back to all the benefits that being able to see brings you. And if you think philosophically about what bats are doing with echolocation and so on, I have no idea, but I suspect that they form an image of the world in pretty much the same way that we do. It’s just a matter of mental reconstruction.

(01:24:10)
So I suppose the other thing about sight: there are single-celled organisms that have got a lens and a retina and a cornea and so on. Basically, they’ve got a camera-type eye in a single cell. They don’t have a brain; what they understand about their world is impossible to say, but they’re capable of coming up with the same structures to do so. So I suppose the point then is that once you’ve got things like eyes, you have a big driving pressure on the central nervous system to figure out what it all means.

(01:24:44)
And then we come around to your other point about manipulation, sensory input, and so on about now you have a huge requirement to understand what your environment is and what it means and how it reacts and how you should run away and where you should stay put.
Lex Fridman
(01:24:59)
Actually on that point, let me… I don’t know if you know the work of Donald Hoffman, who uses the argument, the mechanism of evolution, to say that there’s not necessarily a strong evolutionary value to seeing the world as it is, so objective reality, that our perception actually is very different from what’s objectively real. We’re living inside an illusion, and we’re basically… The entire set of species on Earth, I think, I guess, is competing in a space that’s an illusion, that’s distinct from, that’s far away from physical reality as defined by physics.
Nick Lane
(01:25:46)
I’m not sure it’s an illusion so much as a bubble. We have a sensory input, which is a fraction of what we could have a sensory input on, and we interpret it in terms of what’s useful for us to know to stay alive. So, yes, it’s an illusion in that sense, but-
Lex Fridman
(01:26:00)
So it’s a subset-
Nick Lane
(01:26:02)
… a tree is physically there, and if you walk into that tree, it’s not purely a delusion. There’s some physical reality to it.
Lex Fridman
(01:26:10)
So it’s a sensory slice into reality as it is, but because it’s just a slice, you’re missing the big picture. But he says that that slice doesn’t necessarily need to be a slice. It could be a complete fabrication that’s just consistent amongst the species, which is an interesting, or at least a humbling, realization that our perception is limited and our cognitive abilities are limited. And at least to me, his argument from evolution, I don’t know how strong that is as an argument, but I do think that life can exist in the mind.
Nick Lane
(01:26:55)
Yes.
Lex Fridman
(01:26:56)
In the same way that you can do a virtual reality video game and you can have a vibrant life inside that place, and that place is not real in some sense, but you could still have a vibe… All the same forces of evolution, all the same competition, the dynamics between humans you can have, but I don’t know if there’s evidence for that being the thing that happened on Earth. It seems that Earth-
Nick Lane
(01:27:25)
I think in either environment, I wouldn’t deny that you could have exactly the world that you talk about, and it would be very difficult to… the idea in the Matrix movies and so on, that the whole world is completely a construction, and we’re fundamentally deluded. It’s difficult to say that’s impossible or couldn’t happen, and certainly we construct in our minds what the outside world is. But we do it on input, and that input, I would hesitate to say it’s not real, because it’s precisely how we do understand the world. We have eyes, but if you keep someone, and apparently this kind of thing happens, someone kept in a dark room for five years or something like that, they never see properly again because the neural wiring that underpins how we interpret vision never developed.

(01:28:19)
When you watch a child develop, it walks into a table. It bangs its head on the table and it hurts. Now you’ve got multiple inputs. You’ve got, one, pain from this sharp edge; number two, probably you’ve touched it and realized it’s there, it’s a sharp edge; and then you’ve got the visual input. And you put the three things together and think, “I don’t want to walk into a table again.” So you’re learning, and it’s a limited reality, but it’s a true reality. And if you don’t learn that properly, then you will get eaten, you will get hit by a bus, you will not survive. And the same if you’re in some kind of, let’s say, computer construction of reality. I’m not on my own ground here, but if you construct the laws that this is what reality is inside this, then you play by those laws.
Lex Fridman
(01:29:05)
Yeah. Well, as long as the laws are consistent. So just like you said in the lab, the interesting thing about the simulation question, yes, it’s hard to know if we’re living inside a simulation, but also, yes, it’s possible to do these kinds of experiments in the lab now more and more. To me, the interesting question is, how realistic does a virtual reality game need to be for us to not be able to tell the difference? A more interesting question to me is, how realistic or interesting does the virtual reality world need to be in order for us to want to stay there forever or much longer than physical reality, prefer that place, and also prefer it not as we prefer hard drugs, but prefer it in a deep, meaningful way in the way we enjoy life itself?
Nick Lane
(01:29:59)
I suppose the issue with the matrix, I imagine that it’s possible to delude the mind sufficiently that you genuinely in that way do think that you are interacting with the real world, when in fact, the whole thing’s a simulation. How good does a simulation need to be able to do that? Well, it needs to convince you that all your sensory input is correct and accurate and joins up and make sense. Now, that sensory input is not something that we’re born with. We’re born with a sense of touch. We’re born with eyes and so on, but we don’t know how to use them. We don’t know what to make of them. We go around, we bump into trees. We cry a lot. We’re in pain a lot. We’re basically booting up the system so that it can make head or tail of the sensory input that it’s getting. And that sensory input’s not just a one-way flux of things. It’s also you have to walk into things. You have to hear things. You have to put it together.

(01:30:53)
Now, if you’ve got just babies in the matrix who are slotted into this, I don’t think they have that kind of sensory input. I don’t think they would have any way to make sense of New York as a world that they’re part of. The brain is just not developed in that way.
Lex Fridman
(01:31:10)
Well, I can’t make sense of New York in this physical reality either. But yeah, you said pain and walking into things. Well, you can create a pain signal, and as long as it’s consistent that certain things result in pain, you can start to construct a reality. Maybe you disagree with this, but I think we are born almost with a desire to be convinced by our reality, a desire to make sense of our reality.
Nick Lane
(01:31:39)
Oh, I’m sure we are, yes.
Lex Fridman
(01:31:40)
So there’s an imperative… So whatever that reality is given to us, like the table hurts, fire is hot, I think we want to be deluded in the sense that we want to make a simple… Einstein’s simple theory of the thing around us, we want that simplicity. So maybe the hunger for the simplicity is the thing that could be used to construct a pretty dumb simulation that tricks us. So maybe tricking humans doesn’t require building a universe.
Nick Lane
(01:32:11)
No, this is not what I work on, so I don’t know how close to it we are-
Lex Fridman
(01:32:16)
I don’t think anyone works on this.
Nick Lane
(01:32:16)
But I agree with you-
Lex Fridman
(01:32:16)
Mark Zuckerberg.
Nick Lane
(01:32:18)
Yeah, I’m not sure that it’s a morally justifiable thing to do, but is it possible in principle? I think it’d be very difficult, but I don’t see why in principle it wouldn’t be possible. And I agree with you that we try to understand the world, we try to integrate the sensory inputs that we have, and we try to come up with a hypothesis that explains what’s going on. I think, though, that we have huge input from the social context that we’re in. We don’t do it by ourselves. We don’t kind of blunder around in a universe by ourselves and understand the whole thing. We’re told by the people around us what things are and what they do, and the language is coming in here and so on. So it would have to be an extremely impressive simulation to simulate all of that.

Consciousness

Lex Fridman
(01:33:08)
Yeah. Simulate all of that, including the social construct, the spread of ideas and the exchange of ideas. I don’t know. But those questions are really important to understand as we become more and more digital creatures. It seems like the next step of evolution is us becoming partial… All the same mechanisms we’ve talked about are becoming more and more plugged into the machine. We’re becoming cyborgs. And there’s an interesting interplay between wires and biology, zeros and ones and the biological systems, and I don’t think we’ll have the luxury to see humans as disjoint from the technology we’ve created for much longer. We are, in organisms, that’s [inaudible 01:33:56].
Nick Lane
(01:33:56)
Yeah. I agree with you, but we come really with this to consciousness, and is there a distinction there? Because what you are saying, the natural end point says we are indistinguishable, that if you are capable of building an AI, which is sufficiently close and similar, that we merge with it, then to all intents and purposes, that AI is conscious as we know it. And I don’t have a strong view, but I have a view, and I wrote about it in the epilogue to my last book.

(01:34:37)
Because 10 years ago I wrote a chapter in a book called Life Ascending about consciousness. And the subtitle of Life Ascending was The Ten Great Inventions of Evolution, and I couldn’t possibly write a book with a subtitle like that that did not include consciousness, and specifically consciousness as one of the great inventions. And it was in part because I was just curious to know more and I read more for that chapter. I never worked on it, but I’ve always… How can anyone not be interested in the question?

(01:35:09)
And I was left with the feeling that, A, nobody knows, and B, there are two main schools of thought out there with a big kind of skew in distribution. One of them says, oh, it’s a property of matter. It’s an unknown law of physics. Panpsychism: everything is conscious. The sun is conscious. A rock is conscious. It’s just a matter of how much. And I find that very unpersuasive. I can’t say that it’s wrong. It’s just that I think we somehow can tell the difference between something that’s living and something that’s not. And then the other end is that it’s an emergent property of a very complex central nervous system. I never quite understand what people mean by words like emergence. There are genuine examples, but I think we very often tend to use it to plaster over ignorance.

(01:36:00)
As a biochemist, the question for me then was, okay, so it’s a concoction of a central nervous system. A depolarizing neuron gives rise to a feeling, to a feeling of pain or to a feeling of love or anger, or whatever it may be. So what, then, is a feeling in biophysical terms? In the central nervous system, which bit of the wiring gives rise to it? And I’ve never seen anyone answer that question in a way that makes sense to me.
Lex Fridman
(01:36:41)
And that’s an important question to answer.
Nick Lane
(01:36:43)
I think if we want to understand consciousness, that’s the only question to answer, because certainly an AI is capable of out-thinking us, and it is only a matter of time. Maybe it’s already happened in terms of just information processing and computational skill. I don’t think we have any problem in designing a mind which is at least the equal of the human mind. But in terms of what we value the most as humans, which is to say our feelings, our emotions, our sense of what the world is in a very personal way, which I think means as much or more to people than their information processing, that’s where I don’t think that AI will necessarily become conscious, because I think consciousness is a property of life.
Lex Fridman
(01:37:33)
Well, let’s talk about it more. You’re an incredible writer, one of my favorite writers. So let me read from your latest book, Transformer, where you write about consciousness. “‘I think, therefore I am,’ said Descartes. It is one of the most celebrated lines ever written. But what am I, exactly? An artificial intelligence can think too, by definition, and therefore is. Yet few of us could agree whether AI is capable in principle of anything resembling human emotions, of love or hate, fear and joy, of spiritual yearning for oneness or oblivion, or corporeal pangs of thirst and hunger. The problem is we don’t know what emotions are,” as you were saying. “What is a feeling in physical terms? How does a discharging neuron give rise to a feeling of anything at all? This is the ‘hard problem’ of consciousness, the seeming duality of mind and matter, the physical makeup of our innermost self. We can understand in principle how an extremely sophisticated parallel processing system could be capable of wondrous feats of intelligence. But we can’t answer in principle whether such a supreme intelligence would experience joy or melancholy. What is the quantum of solace?”

(01:38:54)
Speaking to the question of emergence, there’s a technical angle here. There’s an excellent recent paper about this phase transition in the emergence of performance in neural networks on problems of NLP, natural language processing. So with language models, there seems to be this question of size. At some point, there is a phase transition as you grow the size of the neural network. So this is somewhat of a technical question that you can philosophize over.

(01:39:32)
The technical question is: is there a size of neural network at which it starts to be able to form the kind of representations that can capture a language, and therefore be able to linguistically capture knowledge that’s sufficient to solve a lot of problems in language? Like being able to have a conversation. And there seems to be not a gradual increase but a phase transition, and they’re trying to construct the science of where that is: what is a good size of a neural network, and why does such a phase transition happen? Anyway, that points to emergence, that there could be stages where a thing goes from being your very intelligent toaster to a toaster that’s feeling sad today and turns away and looks out the window, sighing, having an existential crisis.
Nick Lane
(01:40:30)
I’m thinking of Marvin the Paranoid Android.
Lex Fridman
(01:40:33)
Well, no, Marvin is simplistic because Marvin is just cranky.
Nick Lane
(01:40:38)
Yes.
Lex Fridman
(01:40:39)
He’s-
Nick Lane
(01:40:40)
So easily programmed.
Lex Fridman
(01:40:41)
Yeah. Easily programmed. Non-stop existential crisis. It’s almost, basically… what is it? Notes from Underground by Dostoevsky, just constantly complaining about life. No, I mean capturing the full rollercoaster of human emotion: the excitement, the bliss, the connection, the empathy, and all that kind of stuff. And then the selfishness, the anger, the depression, all that kind of stuff. Capturing all of that and being able to experience it deeply. This is the most important thing you could possibly experience today. The highest highs. The lowest lows. “This is it. My life will be over. I cannot possibly go on.” That feeling, and then after a nap, you’re feeling amazing. That might be something that emerges.
Nick Lane
(01:41:33)
So why would a nap make an AI being feel better?
Lex Fridman
(01:41:42)
First of all, we don’t know that for a human either, right?
Nick Lane
(01:41:45)
But we do know that that’s actually true for many people much of the time. Maybe you’re utterly depressed and you have a nap and you do in fact feel better.
Lex Fridman
(01:41:53)
Oh, you’re actually asking the technical question there. So there’s a biological answer to that. And so the question is whether AI needs to have the same kind of attachments to its body, to bodily function, and to preservation of the brain’s successful function. Self-preservation, essentially, in some deep biological sense.
Nick Lane
(01:42:17)
I mean to my mind it comes back round to the problem we were talking about before about simulations and sensory input and learning what all of this stuff means and life and death. That biology, unlike society, has a death penalty over everything. And natural selection works on that death penalty that if you make this decision wrongly, you die. And the next generation is represented by beings that made a slightly different decision on balance. And that is something that’s intrinsically difficult to simulate in all its richness I would say. So what is-
Lex Fridman
(01:43:09)
Death in all its richness. Our relationship with death or the whole of it? So when you say richness, of course, there’s a lot in that which is hard to simulate. What’s part of the richness that’s hard to simulate?
Nick Lane
(01:43:27)
I suppose the complexity of the environment and the position of an organism in that environment, in the full richness of that environment over its entire life, over multiple generations, with changes in gene sequence over those generations. So slight changes in the makeup of those individuals over generations. But if you take it back to the level of single cells, which I do in the book, and ask how a single cell in effect knows it exists as a unit, as an entity. I mean, “know”, obviously it doesn’t know anything, but it acts as a unit, and it acts with astonishing precision as a unit. And I have suggested that that’s linked to the electrical fields on the membranes themselves, and that they give some indication of how am I doing in relation to my environment, as real-time feedback on the world.

(01:44:28)
And this is something physical which can be selected over generations: if you get this wrong, it’s linked with this set of circumstances that I’ve just… As an individual, I have a moment of blind panic and run. As a bacterium or something, you have some electrical discharge that says blind panic, and it runs, whatever it may be. And you associate over multiple generations that this electrical phase that I’m in now is associated with a response like that. And it’s easy to see how feelings come in through the back door, almost, with that kind of real-time feedback on your position in the world, in relation to how am I doing?

(01:45:22)
And then you complexify the system and yes, I have no problem with phase transition. And can all of this be done purely by the language, by the issues with how the system understands itself? Maybe it can, I honestly don’t know, but the philosophers for a long time have talked about the possibility that you can have zombie intelligence and that there are no feelings there, but everything else is the same. I mean I have to throw this back to you really. How do you deal with the zombie intelligence?
Lex Fridman
(01:46:03)
So first of all, I can see that, from a biologist’s perspective, you think of all the complexity that led up to the human being, the entirety of the history of four billion years that in some deep sense integrated the human being into this environment, and that dance of the organism and the environment. You could see how emotions arise from that, and then our emotions are deeply connected to creating a human experience, and into that you mix consciousness and the full mess of it. But from the perspective of an intelligent organism that’s already here, like a baby that learns: it doesn’t need to learn how to be a collection of cells or how to do all the things those cells need to do. The basic function of a baby, as it learns, is to interact with its environment, to learn from its environment, to learn how to fit into society.
Nick Lane
(01:47:03)
And the basic response of the baby is to cry a lot of the time.
Lex Fridman
(01:47:07)
Cry. Well, maybe convince the humans to protect it or to discipline it, to teach it, whatever. I mean, we’ve developed a bunch of different tricks for how to get our parents to take care of us, to educate us, to teach us about the world. Also, we’ve constructed the world in such a way that it’s safe enough for us to survive in and yet dangerous enough to learn the valuable lessons. There are still hard corners, so we can still run into them, and it hurts like hell. So AI needs to solve that problem, not the problem of constructing this super complex organism that leads up to it. To make an apple pie, you need to build the whole universe. I think the zombie question is something I would leave to the philosophers, and I will also leave to them the definition of love and what happens between two human beings when there’s a magic that just grabs them, like nothing else matters in the world.

(01:48:20)
And somehow you’ve been searching for this feeling, this moment, this person your whole life. That feeling, the philosophers can have a lot of fun with that one. And they can also say that that’s just… You could have a biological explanation, you could have all kinds of explanations. It’s all fake. Ayn Rand would say it’s all selfish. There are a lot of different interpretations. I’ll leave it to the philosophers. The point is the feeling sure as hell feels very real. And if my toaster makes me feel like it’s the only toaster in the world, and when I leave I miss the toaster, and when I come back I’m excited to see the toaster, and my life is meaningful and joyful, and the friends I have around me get a better version of me because that toaster exists, that sure as hell feels-
Nick Lane
(01:49:12)
I mean-
Lex Fridman
(01:49:12)
…conscious toaster.
Nick Lane
(01:49:13)
…is that psychologically different to having a dog?
Lex Fridman
(01:49:16)
No.
Nick Lane
(01:49:16)
Because I mean most people would dispute whether we can say a dog… I would say a dog is undoubtedly conscious, but some people would say-
Lex Fridman
(01:49:24)
But there are degrees of consciousness and so on. But people are definitely much more uncomfortable saying a toaster can be conscious than a dog. And yet there’s still a deep connection. And you could say our relationship with the dog has more to do with anthropomorphism, like we kind of project the human being onto it.
Nick Lane
(01:49:42)
Maybe.
Lex Fridman
(01:49:43)
We can do the same damn thing with a toaster.
Nick Lane
(01:49:45)
Yes, but you can look into the dog’s eyes and you can see that it’s sad, that it’s delighted to see you again. I don’t have a dog by the way. It’s not that I’m a dog person. I’m a cat person-
Lex Fridman
(01:49:55)
And dogs are actually incredibly good at using their eyes to do just that.
Nick Lane
(01:49:59)
They are. Now, I don’t imagine that a dog is remotely as close to being intelligent as an AI intelligence, but it’s certainly capable of communicating emotionally with us.
Lex Fridman
(01:50:12)
But here’s what I would venture to say. We tend to think, because AI plays chess well and is able to fold proteins now, that it’s intelligent. I would argue that in order to communicate with humans, in order to have emotional intelligence, it actually requires another order of magnitude of intelligence. It’s not easy to be flawed. Solving a mathematical puzzle is not the same as the full complexity of human-to-human interaction. We humans just take for granted the things we’re really good at. Nonstop, people tell me how shitty people are at driving. No, humans are incredible at driving. Bipedal walking, object manipulation. We’re incredible at this. And so people tend to-
Nick Lane
(01:51:04)
Discount the things we all just take for granted.
Lex Fridman
(01:51:07)
And one of those things that they discount is our ability, the dance of conversation and interaction with each other, the ability to morph ideas together, the ability to get angry at each other and then to miss each other, to create a tension that makes life fun and difficult and challenging in a way that’s meaningful, that is a skill that’s learned and AI would need to solve that problem.
Nick Lane
(01:51:33)
I mean, in some sense what you’re saying is AI cannot become meaningfully emotional, let’s say, until it experiences some kind of internal conflict that it’s unable to reconcile these various aspects of reality or its reality with a decision to make. And then it feels sad necessarily because it doesn’t know what to do. I certainly can’t dispute that. That may very well be how it works. I think the only way to find out is to do it and-
Lex Fridman
(01:52:05)
And to build it, and leave it to the philosophers whether it actually feels sad or not. The point is the robot will be sitting there alone, having an internal conflict, an existential crisis, and that’s required for it to have a deep, meaningful connection with another human being. Now, does it actually feel that? I don’t know.
Nick Lane
(01:52:24)
But I’d like to throw something else at you, which troubled me on reading it: Yuval Noah Harari’s book 21 Lessons for the 21st Century. He’s written about this kind of thing on various occasions, and he sees biochemistry as an algorithm, and then AI will necessarily be able to hack that algorithm and do it better than humans. So there will be AI better at writing music that we appreciate than Mozart ever could, or writing better than Shakespeare ever did, and so on, because biochemistry is algorithmic, and all you need to do is figure out which bits of the algorithm to play to make us feel good or bad or appreciate things. And as a biochemist, I find that argument close to irrefutable and not very enjoyable. I don’t like the sound of it; that’s just my reaction as a human being. You might like the sound of it, because it says that AI is capable of the same kind of emotional feelings about the world as we are, because the whole thing is an algorithm, and you can program an algorithm, and there you are. He then has a peculiar final chapter where he talks about consciousness in rather separate terms, and he’s talking about meditating and so on and getting in touch with his inner consciousness. I don’t meditate; I don’t know anything about that. But he wrote in very different terms about it, as if somehow it’s a way out of the algorithm. Now, it seems to me that consciousness in that sense is capable of scuppering the algorithm. In terms of the biochemical feedback loops and so on, it is undoubtedly algorithmic. But in terms of what we decide to do, it can be much more… Based on an emotion, we can just think, ah, I don’t care. I can’t resolve this complex situation.

(01:54:20)
I’m going to do that. And that can be based, in effect, on a different currency, which is the currency of feelings, something we don’t have very much personal control over. And then it comes back around to you: what are you trying to get at with AI? Do we need to have some system which is capable of overriding a rational decision which cannot be made, because there’s too much conflicting information, by effectively an emotional, judgmental decision that just says, do this and see what happens? That’s what consciousness is really doing, in my view.
Lex Fridman
(01:54:53)
Yeah. And the question is whether it’s a different process or just a higher-level process. The idea that biochemistry is an algorithm is, to me, an oversimplistic view. There are a lot of things that, the moment you say them, are irrefutable, but it simplifies-
Nick Lane
(01:55:17)
I’m sure it’s an extremely complex-
Lex Fridman
(01:55:18)
…and in the process loses something fundamental. So for example, calling the universe an information processing system. Sure, yes, you can make that claim. It’s a computer that’s performing computations. But you’re missing the process of the entropy somehow leading to pockets of complexity that create these beautiful artifacts that are incredibly complex, and they’re like machines. And then those machines, through the process of evolution, are constructing even further complexity. In calling the universe an information processing machine, you’re missing those little local pockets and how difficult it is to create them.

(01:56:05)
So the question to me is, if biochemistry is an algorithm, which I think is incorrect, how difficult is it to create a software system that runs the human body? I think that is going to take so long. I mean, that’s going to be centuries from now, to be able to reconstruct a human. Now, what I would venture to say is that to get some of the magic of a human being, what we’re describing with the emotions and the interactions, the way a dog makes us smile and feel joyful and all those kinds of things, that will come much sooner. Because that doesn’t require us to reverse engineer the algorithm of biochemistry.
Nick Lane
(01:56:44)
Yes, but the toaster is making you happy.
Lex Fridman
(01:56:47)
Yes.
Nick Lane
(01:56:48)
It’s not about whether you make the toaster happy.
Lex Fridman
(01:56:51)
No, it has to be. It has to be. It has to be. The toaster has to be able to leave me happy.
Nick Lane
(01:56:58)
The toaster has to be happy. Yes. But the toaster is the AI in this case, a very intelligent-
Lex Fridman
(01:57:00)
Yeah. The toaster has to be able to be unhappy and leave me. That’s essential.
Nick Lane
(01:57:06)
Yeah.
Lex Fridman
(01:57:07)
That’s essential for my being able to miss the toaster. If the toaster is just my servant, or a provider of services that tells me the weather and makes toast, that’s not going to be a deep connection. It has to have internal conflict. You write about life and death. It has to be able to be conscious of its mortality and the finiteness of its existence, and that its life is temporary, and therefore it needs to be more selective with the kind of people it hangs out with.
Nick Lane
(01:57:38)
One of the most moving moments in the movies from when I was a boy was the unplugging of HAL in 2001, where that was the death of a sentient being, and HAL knew it. So I think we all kind of know that a sufficiently intelligent being is going to have some form of consciousness, but whether it would be like biological consciousness, I just don’t know. And if you’re thinking about how we bring the two together, I mean, obviously we’re going to interact more closely with AI, but are we really? Is a dog really like a toaster, or is there really some kind of difference there? You were saying biochemistry is algorithmic, but it’s not a single algorithm, and it’s very complex. Of course it is. So it may be that there are, again, conflicts in the circuits of biochemistry, but I have a feeling that the level of complexity of the total biochemical system at the level of a single cell is less than the level of neural networking in the human brain or in an AI.
Lex Fridman
(01:58:52)
Well, I guess I assumed that we were including the brain in the biochemistry algorithm because you have to-
Nick Lane
(01:58:59)
I would see that as a higher level of organization of neural networks. They’re all using the same biochemical wiring within themselves.
Lex Fridman
(01:59:06)
Yeah. But the human brain is not just neurons, it’s the immune system. It’s the whole package. I mean, to have a biochemical algorithm that runs an intelligent biological system, you have to include the whole damn thing. And it’s pretty fascinating. It comes from an embryo. The whole… I mean boy. I mean if you can… What is the human being? Because it’s-
Nick Lane
(01:59:33)
But if you look-
Lex Fridman
(01:59:34)
…just some code. And then, so DNA doesn’t just tell you what to build, but how to build it. I mean, the thing is impressive, and the question is how difficult is it to reverse engineer the whole shebang?
Nick Lane
(01:59:52)
Very difficult.
Lex Fridman
(01:59:54)
I would say it’s… I don’t want to say impossible, but it’s much easier to build a human, to build a fake human, a human-like thing, than to reverse engineer the entirety of the process, the evolution that produced it.
Nick Lane
(02:00:15)
I’m not sure if we are capable of reverse engineering the whole thing, if the human mind is capable of doing that. I mean, I wouldn’t be a biologist if I wasn’t trying, but I know I can’t understand the whole problem. I’m just trying to understand the rudimentary outlines of the problem. There’s another aspect, though. You’re talking about developing from a single cell to the human mind and all the subsystems that are part of it, the immune system and so on. This is something that I imagine you’ll talk about with Michael Levin. You talk about reverse engineering, but so little is known about the developmental pathways that go from a genome to a fully wired organism. And a lot of it seems to depend on the same electrical interactions that I was talking about happening at the level of single cells and their interaction with the environment. There’s a whole electrical-field side to biology that is not yet written into any of the textbooks, which is about how an embryo, or a single cell, develops into these complex systems.

(02:01:32)
What defines the head, what defines the immune system, what defines the brain, and so on? That really is written in a language that we’re only just beginning to understand. And frankly, most biologists are still very reluctant to even get themselves tangled up in questions like electrical fields influencing development. It seems like mumbo jumbo to a lot of biologists, and it should not, because this is 21st-century biology. This is where it’s going. But we’re not going to reverse engineer a human being, or the mind, or any of these subsystems until we understand how these developmental processes work, how electricity in biology really works, and whether it is linked with feelings or with consciousness and so on. In the meantime, we have to try, but I think that’s where the answer lies.
Lex Fridman
(02:02:22)
So you think it’s possible that the key to things like consciousness, or some of the trickier aspects of cognition, might lie in that early development, the interaction of electricity and biology? Electrical fields, oh God.
Nick Lane
(02:02:40)
But we already know the EEG and so on is telling us a lot about brain function, but we don’t know which cells, which parts of a neural network, are giving rise to the EEG. We don’t know the basics. The assumption is… I mean, we know it’s neural networks, we know it’s multiple cells, hundreds or thousands of cells involved, and we assume that it’s to do with depolarization during action potentials and so on. But the mitochondria which are in there have far more membrane than the plasma membrane of the neuron.

(02:03:08)
And there’s a much greater membrane potential, and it’s formed in, very often, parallel cristae, which are capable of reinforcing a field and generating fields over longer distances. And nobody knows if that plays a role in consciousness or not. There are reasons to argue that it could, but frankly, we simply do not know, and it’s not taken into consideration. You look at the structure of the mitochondrial membranes in the brains of simple things like Drosophila, the fruit fly, and they have amazing structures. You can see lots of little rectangular things all lined up in amazing patterns. What are they doing? Why are they like that? We haven’t the first clue.
Lex Fridman
(02:03:52)
What do you think about organoids, brain organoids, trying to study in a lab, in the Petri dish, the development of organs? Do you think that’s promising, or do you have to look at whole systems?
Nick Lane
(02:04:08)
I’ve never done anything like that. I don’t know much about it. The people I’ve talked to who do work on it say amazing things can happen, and a bit of a brain grown in a dish is capable of experiencing some kind of feelings, or even memories of its former brain. Again, I have a feeling that until we understand how to control the electrical fields that control development, we’re not going to understand how to turn an organoid into a real functional system.

AI and biology

Lex Fridman
(02:04:36)
But how do we get that understanding? It’s so incredibly difficult. I mean, you would have to… One promising direction, I’d love to get your opinion on this. I don’t know if you’re familiar with the work of DeepMind and AlphaFold with protein folding and so on. Do you think it’s possible that that will give us some breakthroughs in biology trying to basically simulate and model the behavior of trivial biological systems as they become complex biological systems?
Nick Lane
(02:05:11)
I’m sure it will. The interesting thing to me about protein folding is that for a long time, my understanding, and this is not what I work on, so I may have got this wrong, but my understanding is that you take the sequence of a protein and you try to fold it, and there are multiple ways in which it can fold. And to come up with the correct conformation is not a very easy thing, because you’re doing it from first principles, from a string of letters which specify the string of amino acids. But what actually happens is, when a protein is coming out of a ribosome, it’s coming out of a charged tunnel, and it’s in a very specific environment which is going to force this bit to go there, and then this one to go there, and this one to come like that. And so you’re forcing a specific set of conformational changes onto it as it comes out of the ribosome.

(02:05:58)
So by the time it’s fully emerged, it’s already got its shape. And that shape depended on the immediate environment that it was emerging into, one letter, one amino acid, at a time. And I don’t think that the field was looking at it that way. And if that’s correct, then that’s very characteristic of science, which is to say it very often asks the wrong question and then does amazingly sophisticated analyses on something, having never thought to actually ask, well, what is biology doing? And biology is giving you a charged electrical environment that forces the protein to be this way. Now, did DeepMind come up, through patterns, with some answer that was like that? I’ve got absolutely no idea. It ought to be possible to deduce that from the shapes of proteins. It would require much greater skill than the human mind has. But the human mind is capable of saying, “Well, hang on, let’s look at this exit tunnel and try and work out what shape this protein is going to take.” And we can figure that out.
Lex Fridman
(02:07:00)
Well, that’s really interesting about the exit tunnel. But sometimes we get lucky, and just like in science, the simplified view or the static view will actually solve the problem for us. So in this case, it’s very possible that the sequence of letters has a unique mapping to a structure without considering how it unraveled, so without considering the tunnel. That seems to be the case in this situation: the cool thing about proteins is that, of all the different shapes one could possibly take, a protein actually seems to take a very specific, unique shape given its sequence.
Nick Lane
(02:07:36)
That’s forced on you by an exit tunnel. So the problem is actually much simpler than you thought. And then there’s a whole army of proteins which change the conformational state, chaperone proteins, and they’re only used when there’s presumably some issue with how it came out of the exit tunnel, and you want to do it differently to that. So very often the chaperone proteins will go there and will influence the way in which it folds.

(02:08:00)
So there are two ways of doing it. Either you can look at the structures and the sequences of all the proteins, and you can apply an immense mind to it, and figure out what the patterns are… Or you can look at the actual situation where it is and say, “Well, hang on, it was actually quite simple.” It’s got a charged environment, and then, of course, it’s forced to come out this way. And then the question will be, “Well, do different ribosomes have different charged environments? What happens if a chaperone…” You’re asking a different set of questions to come to the same answer, in a way which is telling you a much simpler story, and explains why it is the way it is. Rather than saying, “This is one in a billion different possible conformational states that this protein could have,” you’re saying, “Well, it has this one because that was the only one it could take, given its setting.”
Lex Fridman
(02:08:48)
Well, yeah, I mean, currently humans are very good at that kind of first principles thinking, of stepping back.
Nick Lane
(02:08:54)
Yeah.
Lex Fridman
(02:08:54)
But I think AI is really good at collecting a huge amount of data, like a huge amount of observations of planets, and figuring out that Earth is not at the center of the universe, that we’re actually orbiting the Sun. But then, you can, as a human being, ask, “Well, how do solar systems come to be? What are the different forces that are required to make this kind of pattern emerge?” And then, you start to invent things like gravity. I mean, obviously-
Nick Lane
(02:09:26)
Is it something [inaudible 02:09:26]-
Lex Fridman
(02:09:26)
I mixed up the ordering; gravity wasn’t yet considered as a thing that connects planets. But we are able to think about those big-picture things as human beings. AI is just very good at inferring simple models from a huge amount of data. And the question is, with biology, we kind of go back and forth on how to solve biology. Listen, protein folding was thought to be impossible to solve. And there are a lot of brilliant PhD students that worked one protein at a time, trying to figure out the structure, and the fact that AI was able to do that…
Nick Lane
(02:10:03)
Oh, I’m not knocking it at all, but I think that people have been asking the wrong question.
Lex Fridman
(02:10:09)
But then, as the people start to ask better and bigger questions, the AI kind of enters the chat and says, “I’ll help you out with that.”
Nick Lane
(02:10:22)
Can I give you another example from my own work? The risk of getting a disease as we get older, there are genetic aspects to it. If you spend your whole life overeating, and smoking, and whatever, that’s a whole separate question, but there’s a genetic side to the risk, and we know a few genes that increase your risk of certain things. And for probably 20 years now, people have been doing what’s called GWAS, which is genome-wide association studies.

(02:10:55)
So you effectively scan the entire genome for any single nucleotide polymorphisms, which is to say a single-letter change in one place that has a higher association with a particular disease. And you can come out with thousands of these things across the genome. And you can add them all up and ask, “Well, do they add up to explain the known genetic risk of this disease?” And the known genetic risk often comes from twin studies: you can say that if this twin gets epilepsy, there’s a 40 or 50% chance that the other, identical twin will also get epilepsy. Therefore, the genetic factor is about 50%, and so the gene similarities that you see should account for that 50% known risk.

(02:11:46)
Very often, it accounts for less than a tenth of the known risk. And there are two possible explanations. There’s one which people tend to go for, which is to say, “Ah, well, we don’t have enough statistical power. Maybe there’s a million of them. We’ve only found a thousand, but if we find the other million, they’re weakly related, but there’s a huge number of them, and so we’ll account for that whole risk.” Maybe there’s a billion of them, [inaudible 02:12:10]. So that’s one way. The other way is to say, “Well, hang on a minute. You’re missing a system here. That system is the mitochondrial DNA,” which people tend to dismiss, because it’s small and it doesn’t change very much.

(02:12:27)
But a few single letter changes in that mitochondrial DNA, it controls some really basic processes. It controls not only all the energy that we need to live, and to move around, and do everything we do, but also biosynthesis, to make the new building blocks, to make new cells. And cancer cells very often take over the mitochondria and rewire them, so that instead of using them for making energy, they’re effectively using them as precursors for the building blocks, for biosynthesis. You need to make new amino acids, new nucleotides for DNA. You want to make new lipids to make your membranes and so on. So they kind of rewire metabolism.

(02:13:06)
Now, the problem is that we’ve got all these interactions between mitochondrial DNA and the genes in the nucleus that are overlooked completely, because people literally throw away the mitochondrial genes, and we can see in fruit flies that they interact and produce big differences in risk. So you can set AI onto this question of exactly how many of these base changes there are, and that’s just one possible solution, that maybe there are a million of them and they do account for the greater part of the risk. Or the other possibility is that they aren’t there, it’s just not there, that actually the risk lies in something you weren’t even looking at. And this is where human intuition is very important, and there’s this feeling that, “Well, I’m working on this, and I think it’s important, and I’m bloody-minded about it.” And in the end, some people are right. It turns out that it was important. Can you get AI to do that, to be bloody-minded?
Lex Fridman
(02:14:03)
And that, “Hang on a minute, you might be missing a whole other system here that’s much bigger,” that’s the moment of discovery, of scientific revolution. I’m giving up on saying AI can’t do something. I’ve said it enough times about enough things. I think there’s been a lot of progress. And instead, I’m excited by the possibility of AI helping humans. But at the same time, just like I said, we seem to dismiss the power of humans.
Nick Lane
(02:14:37)
Yes, yes.
Lex Fridman
(02:14:38)
We’re so limited in so many ways, in what feel like dumb ways: we’re not strong, our attention and our memory are limited, our ability to focus on things is limited, at least in our own perception of what limited means. But actually, there’s an incredible computer behind the whole thing that makes this whole system work. Our ability to interact with the environment, to reason about the environment, there’s magic there.
Nick Lane
(02:14:38)
Yeah.
Lex Fridman
(02:15:14)
And I am hopeful that AI can capture some of that same magic, but that magic is not going to look like a Deep Blue playing chess.
Nick Lane
(02:15:22)
No.
Lex Fridman
(02:15:23)
It’s going to be more interesting.
Nick Lane
(02:15:24)
But I don’t think it’s going to look like pattern finding, either. I mean, that’s essentially what you’re telling me it does very well at the moment. And my point is it works very well where you’re looking for the right pattern, but we are storytelling animals. And a hypothesis is a story. It’s a testable story, but a new hypothesis is a leap into the unknown, and it’s a new story, basically. And it says, “This leads to this, leads to that.” It’s a causal set of storytelling.
Lex Fridman
(02:15:54)
It’s also possible that the leap into the unknown has a pattern of its own.
Nick Lane
(02:15:58)
Yes, it is.
Lex Fridman
(02:15:59)
And it’s possible that it’s learnable.
Nick Lane
(02:15:59)
I’m sure it is. There’s a nice book by Arthur Koestler on the nature of creativity, and he likens it to a joke where the punchline goes off in a completely unexpected direction, and says that this is the basis of human creativity, that some creative switch of direction to an unexpected place is similar to a… I’m not saying that’s how it works, but it’s a nice idea, and there must be some truth in it. Most of the stories we tell are probably the wrong story, and probably going nowhere, and probably not helpful, and we definitely don’t do as well at seeing patterns in things.

(02:16:41)
But some of the most enjoyable human aspects is finding a new story that goes to an unexpected place. And again, these are all aspects of what being human means to me. And maybe these are all things that AI figures out for itself, or maybe they’re just aspects… But I just have the feeling sometimes that, if we wish to craft an AI system which is somehow human-like, we don’t have a firm enough grasp of what humans really are like, in terms of how we are built.
Lex Fridman
(02:17:21)
But we’re getting a better understanding of that. I agree with you completely. We try to build a thing and then we go, “Hang on a minute, there’s another system here.” And actually, the attempt to build AI that’s human-like is getting us to a deeper understanding of human beings. The funny thing is, I recently talked to Magnus Carlsen, widely considered to be the greatest chess player of all time, and he talked about AlphaZero, which is a system from DeepMind that plays chess. And he had a funny comment, he has a kind of dry sense of humor, but he was extremely impressed when he first saw AlphaZero play, and he said that it did a lot of things that could easily be mistaken for creativity.

(02:18:09)
So he refused, as a typical human, to give the system sort of its due, because it came up with a lot of things that a lot of people are extremely impressed by, not just the sheer calculation, but the brilliance of play. So one of the things that it does in really interesting ways is it sacrifices pieces. In chess, that means you basically take a few steps back in order to take a step forward. You give away pieces for some future reward. And that, for us humans, is where the art is in chess. You take big risks, and for us humans those risks are especially painful, because you have a fog of uncertainty before you. So to take a risk now based on an intuition of, “I think this is the right risk to take, but there are so many possibilities,” that’s where it takes guts. That’s where the art is, that’s the danger.

(02:19:14)
And then, AlphaZero takes those same kinds of risks, and takes them to an even greater degree, but of course, it does it in a way you could easily reduce down to a cold calculation over patterns. But boy, when you see the final result, it sure looks like the same kind of magic, and creativity, that we see when we see creative play on the chess board. But the chess board is very limited, and the question is, as we get better and better, can we do that same kind of creativity in mathematics, in programming, and then eventually in biology, psychology, and expand into more and more complex systems?
Nick Lane
(02:20:04)
I used to go running when I was a boy, fell running, which is to say running up and down mountains, and I was never particularly great at it, but there were some people who were amazingly fast, especially at running down. And I realized, in trying to do this, that there are three possible ways of doing it, and only two of them work. Either you go extremely slowly and carefully, and you figure out, “Okay, there’s a stone. I’ll put my foot on this stone, and then there’s a muddy puddle I’m going to avoid.” And it’s slow, it’s laborious. You figure it out, step by step. Or you can just go incredibly fast, and you don’t think about it at all. The entire conscious mind is shut out of it, and it’s probably the same playing table tennis or something. There’s something in the mind which is doing a whole lot of subconscious calculations about exactly…

(02:20:54)
And it’s amazing. You can run at astonishing speed down a hillside, with no idea how you did it at all. And then, you panic and you think, “I’m going to break my leg if I keep doing this. I’ve got to think about where I’m going to put my foot.” So you slow down a bit and try to bring this conscious mind in, and then you do, you crash. You cannot think consciously while running downhill. And so it’s amazing how many calculations the mind is able to make.

(02:21:21)
Now, the problem with playing chess or something is whether you’d be able to make all of those subconscious, forward calculations about the likely outcome of this move, in the way that we can when running down a hillside. It’s partly about what we have adapted to do. It’s partly about the reality of the world that we’re in. Running fast downhill is something that we had better be bloody good at, otherwise we’re going to be eaten. Whereas trying to calculate multiple, multiple moves into the future is not something we’ve ever been called on to do. Two or three, four moves into the future is quite enough for most of us, most of the time.
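Nick’s point about lookahead being something we were never built for can be made concrete with rough game-tree arithmetic. The branching factor of 35 below is a commonly quoted approximation for chess, used here purely for illustration:

```python
# Rough game-tree growth for chess, assuming ~35 legal moves per position
# (a standard approximate figure, not an exact one).

BRANCHING = 35

for depth in (2, 4, 8):
    positions = BRANCHING ** depth
    print(f"{depth} half-moves ahead: ~{positions:,} positions")

# Two half-moves ahead is ~1,225 positions, manageable for a human.
# Eight half-moves ahead is already ~2.25 trillion, which is why the
# subconscious "downhill running" computation cannot be brute-force search.
```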
Lex Fridman
(02:22:00)
Yeah, yeah. So we may not be as far towards solving the problem of downhill running as we might think, just because we solved chess. Still, it’s beautiful to see creativity. Humans create machines that are able to create art, art on the chessboard and art otherwise. Who knows how far that takes us? So I mentioned Andrej Karpathy earlier. He and I are big fans of yours. If you’re taking votes, his suggestion was you should write your next book on the Fermi paradox. So let me ask you, on the topic of alien life, since we’ve been talking about life and we’re a kind of alien ourselves, how many alien civilizations are out there, do you think?
Nick Lane
(02:22:58)
Well, the universe is very big, but not as many as most people would like to think is my view, because in the idea that there is a trajectory going from simple cellular life like bacteria all the way through to humans, it seems to me there are some big gaps along that way, and the eukaryotic cell, the complex cell that we have, is the biggest of them. But also, photosynthesis is another. Another interesting gap is the long gap from the origin of the eukaryotic cell to the first animals. That was about a billion years, maybe more than that, and a long delay before oxygen began to accumulate in the atmosphere.

(02:23:42)
So from the first appearance of oxygen in the Great Oxidation Event to enough for animals to respire was close to 2 billion years. Why so long? It seems to be planetary factors. It seems to be geology, as much as anything else, and we don’t really know what was going on. So the idea that there’s a kind of an inevitable march towards complexity and sentient life I don’t think is right. Not to say it’s not going to happen, but I think it’s not going to happen often.
Lex Fridman
(02:24:17)
So if you think of Earth, given the geological constraints and all that kind of stuff, do you have a sense that life, complex life, intelligent life happened really quickly on Earth, or took a really long time? Just to get a sense: are you more sort of saying that it’s very unlikely to get the kind of conditions required to create humans, or is it that, even if you have the conditions, it’s just statistically difficult?
Nick Lane
(02:24:46)
I think, I mean, the problem, the single great problem at the center of all of that, to my mind, is the origin of the eukaryotic cell, which happened once, and without eukaryotes, nothing else would’ve happened, and that is something that-
Lex Fridman
(02:24:59)
Because you’re saying it’s super important, the eukaryotes, but-
Nick Lane
(02:25:02)
I’m saying something tantamount to saying that it is impossible to build something as complex as a human being from bacterial cells.
Lex Fridman
(02:25:09)
Totally agree in some deep, fundamental way, but it’s just like one cell going inside another. Is that so difficult to get to work right, that like [inaudible 02:25:18]-
Nick Lane
(02:25:18)
Well, again, it happened once, and, if you think about it, I’m in a minority view in this position, most biologists probably wouldn’t agree with me anyway, but if you think about the starting point, we’ve got a simple cell. It’s an archaeal cell, we can be fairly sure about that. So it looks a lot like a bacterium, but is in fact from this other domain of life. That means it doesn’t have anything: it doesn’t have a nucleus, it doesn’t really have complex endomembranes. It has a little bit of stuff, but not that much, and it takes up an endosymbiont. So what happens next? And the answer is basically everything to do with complexity.

(02:26:02)
To me, there’s a beautiful paradox here. Plants, and animals, and fungi all have exactly the same type of cell, but they all have really different ways of living. So a plant cell is photosynthetic, they started out as algae in the oceans and so on. So think of algal bloom, single-cell things. The basic cell structure that it’s built from is exactly the same, with a couple of small differences. It’s got chloroplasts as well, it’s got a vacuole, it’s got a cell wall, but that’s about it. Pretty much everything else is exactly the same in a plant cell and an animal cell. And yet, the ways of life are completely different. So this cell structure did not evolve in response to different ways of life, different environments. I’m in the ocean doing photosynthesis, I’m on land running around as part of an animal, I’m a fungus in a soil, spinning out long kind of shoots into whatever it may be, mycelium.

(02:27:03)
So they all have the same underlying cell structure. Why? Almost certainly, it was driven by adaptation to the internal environment, of having these pesky endosymbionts that forced all kinds of change on the host cell. Now, in one way, you could see that as a really good thing, because it may be that there’s some inevitability to this process. It’s as soon as you’ve got endosymbionts, you’re more or less bound to go in that direction. Or, it could be that there’s a huge fluke about it, and it’s almost certain to go wrong in just about every case possible, that the conflict will lead to, effectively, war, leading to death and extinction, and it simply doesn’t work out. So maybe it happened millions of times and it went wrong every time, or maybe it only happened once, and it worked out because it was inevitable. And actually, we simply do not know enough now to say which of those two possibilities is true, but both of them are a bit grim.
Lex Fridman
(02:27:52)
But you’re leaning towards we just got really lucky in that one leap. So do you have a sense that our galaxy, for example, has maybe millions of planets with bacteria living on them?
Nick Lane
(02:28:07)
I would expect billions, tens of billions of planets with bacteria living on them, practically. I mean, there are probably what, 5 to 10 planets per star, of which I would hope that at least one would have bacteria on it. So I expect bacteria to be very common. I simply can’t put a number on it otherwise. I mean, I expect it will happen elsewhere. It’s not that I think we’re living in a completely empty universe.
Lex Fridman
(02:28:31)
That’s so fascinating.
Nick Lane
(02:28:32)
But I think that it’s not going to happen inevitably, and there’s something… That’s not the only problem with complex life on Earth. I mentioned oxygen, and animals, and so on as well. And even humans, we came along very late. You go back 5 million years, and would we be that impressed if we came across a planet full of giraffes? I mean, you’d think, “Hey, there’s life here. There’s a nice planet to colonize or something.” We wouldn’t think, “Oh, let’s try and have a conversation with this giraffe.”
Lex Fridman
(02:29:00)
Yeah, I’m not sure what exactly we would think. I’m not exactly sure what makes humans so interesting from an alien perspective, or how they would notice. I’ll talk to you about cities, too, because that’s an interesting perspective on how to look at human civilization. But your sense… I mean, of course you don’t know, but it’s an interesting world, it’s an interesting galaxy, and it’s an interesting universe to live in, where just about every sun, like 90% of solar systems, has bacteria in it. Imagine that world, where the galaxy maybe has just a handful, if not one, intelligent civilization. That’s a wild world.
Nick Lane
(02:29:00)
It’s a wild world.
Lex Fridman
(02:29:53)
I didn’t even think about that world. There’s a kind of thought that one of the reasons it would be so exciting to find life on Mars, or Titan, or wherever, is that if life is elsewhere, then surely, statistically, the rest would arise too, no matter how unlikely: eukaryotes, multicellular organisms, sex, violence, whatever else is extremely difficult. I mean, photosynthesis, figuring out some machinery that involves the chemistry and the environment to allow the building up of complex organisms, surely that would arise. But man, I don’t know how I would feel about just bacteria everywhere.
Nick Lane
(02:30:38)
Well, it would be depressing, if it was true. I suppose, depressing-
Lex Fridman
(02:30:42)
[inaudible 02:30:42].
Nick Lane
(02:30:42)
I don’t think-
Lex Fridman
(02:30:43)
I don’t know what’s more depressing, bacteria everywhere or nothing anywhere.
Nick Lane
(02:30:47)
Yes, either of them are chilling. But whether it’s chilling or not I don’t think should force us to change our view about whether it’s real or not.
Lex Fridman
(02:30:57)
Yes, yes.
Nick Lane
(02:30:58)
And what I’m saying may or may not be true.
Lex Fridman
(02:31:00)
So how would you feel if we discovered life on Mars? It sounds like you would be less excited than some others, because you’re like, “Well…”
Nick Lane
(02:31:09)
What I would be most interested in is how similar to life on Earth it would be. It would actually turn into quite a subtle problem, because the likelihood of life having gone to and fro between Mars and the Earth is quite… I wouldn’t say high, but it’s not low. It’s quite feasible. And so if we found life on Mars and it had very similar genetic code, but it was slightly different, most people would interpret that immediately as evidence that there’d been transit one way or the other, and that it was a common origin of life on Mars or on the Earth, and it went one way or the other way.

(02:31:43)
The other way to see that question, though, would be to say, “Well, actually the whole beginnings of life lie in deterministic chemistry and thermodynamics, starting with the most likely abundant materials, CO₂ and water on a wet, rocky planet.” And Mars was wet and rocky at the beginning and would, I won’t say inevitably, but potentially almost inevitably, come up with a genetic code which is not very far away from the genetic code that we already have. So we see subtle differences in the genetic code, what does it mean? It could be very difficult to interpret.
Lex Fridman
(02:32:14)
Is it possible, do you think, to tell the difference of something that truly originated…
Nick Lane
(02:32:19)
I think if the stereochemistry was different. We have sugars, for example, that are the L form or the D form, and we have D sugars and L amino acids right across all of life. But for lipids, the bacteria have one stereoisomer and the archaea have the other, the opposite stereoisomer. So it’s perfectly possible to use one or the other one. And the same would almost certainly go for… And I think George Church has been trying to make life based on the opposite stereoisomer. So it’s perfectly possible to do, and it will work. And if we were to find life on Mars that was using the opposite stereoisomer, that would be unequivocal evidence that life had started independently there.
Lex Fridman
(02:33:09)
So hopefully, the life we find will be on Titan, or Europa, or something like that, where it’s less likely that we shared… And it’s harsher conditions, so there’s going to be a weirder kind of life?
Nick Lane
(02:33:20)
I wouldn’t count on that, because-
Lex Fridman
(02:33:22)
Of water.
Nick Lane
(02:33:22)
… life started in deep sea hydrothermal vents here.
Lex Fridman
(02:33:22)
It’s a harsh-
Nick Lane
(02:33:27)
It’s pretty harsh, yeah. So Titan is different. Europa is probably quite similar to Earth, in the sense that we’re dealing with an ocean. It’s an acidic ocean there, as the early Earth would’ve been. And it almost certainly has hydrothermal systems. Same with Enceladus. We can tell that from these plumes coming from the surface, through the ice. We know there’s a liquid ocean and we can tell roughly what the chemistry is. For Titan, we’re dealing with liquid methane and things like that. So that would really, if there really is life there, it would really have to be very, very different to anything that we know on Earth.

Evolution

Lex Fridman
(02:34:00)
So the hard leap, the hardest leap, the most important leap is from prokaryotes to eukaryotes. What’s the second, if we were ranking? You put a lot of emphasis on photosynthesis.
Nick Lane
(02:34:17)
Yeah, and that would be my second one, I think. But it’s not so much… I mean, photosynthesis is part of the problem. It’s a difficult thing to do. Again, we know it happened once, we don’t know why it happened once, but the fact that it was kind of taken on board completely by plants, and algae, and so on as chloroplasts, and did very well in completely different environments, and then on land and whatever else, seems to suggest that there’s no problem with exploring. You could have a separate origin that explored this whole domain over there that the bacteria had never gone into.

(02:34:59)
So that kind of says that the reason that it only happened once is probably because it’s difficult, because the wiring is difficult. But then, it happened at least 2.2 billion years ago, right before the GOE, maybe as long as 3 billion years ago, when some people say there are whiffs of oxygen, there’s just kind of traces in the fossil, in the geochemical record that say maybe there was a bit of oxygen then. That’s really disputed. Some people say it goes all the way back 4 billion years ago, and that it was the common ancestor of life on Earth was photosynthetic. So immediately, you’ve got groups of people who disagree over a 2 billion-year period of time about when it started.

(02:35:41)
But let’s take the latest date, when it’s unequivocal. That’s 2.2 billion years ago, through to around about the time of the Cambrian explosion, when oxygen levels definitely got close to modern levels, which was around about 550 million years ago. So we’ve gone more than one and a half billion years where the Earth was in stasis. Nothing much changed. It’s known as the Boring Billion, in fact. Probably, stuff was… That was when eukaryotes arose, somewhere in there, but it’s… So this idea that the world is constantly changing, that we’re constantly evolving, that we’re moving up some ramp, it’s a very human idea, but in reality, there are kind of tipping points to a new stable equilibrium, where the cells that are producing oxygen are precisely counterbalanced by the cells that are consuming that oxygen, which is why it’s 21% now and has been that way for hundreds of millions of years. We have a very precise balance.

(02:36:46)
You go through a tipping point, and you don’t know where the next stable state’s going to be, but it can be a long way from here. And so if we change the world with global warming, there will be a tipping point. Question is where, and when, and what’s the next stable state? It may be uninhabitable to us. It’ll be habitable to life, for sure, but there may be something like the Permian extinction, where 95% of species go extinct, and there’s a 5-to-10 million year gap, and then life recovers, but without humans.
Lex Fridman
(02:37:16)
And the question, statistically, well, without humans, but statistically, does that ultimately lead to greater complexity, more interesting life, more intelligent life?
Nick Lane
(02:37:25)
Well, after the first appearance of oxygen with the GOE, there was a tipping point which led to a long-term stable state that was equivalent to the Black Sea today, which is to say oxygenated at the very surface and stagnant, sterile… Not sterile, but sulfurous lower down. And that was stable, certainly around the continental margins, for more than a billion years. It was not a state that led to progression in an obvious way.
Lex Fridman
(02:37:55)
Yeah, I mean, it’s interesting to think about evolution, like what leads to stable states, and how often evolutionary pressures emerge from the environment. So maybe other planets are able to create evolutionary pressures, chemical pressures, whatever, some kind of pressure that says, “You’re screwed unless you get your shit together in the next 10,000 years.”
Nick Lane
(02:38:23)
Yeah.
Lex Fridman
(02:38:23)
Like, a lot of pressure. It seems like Earth, the Boring Billion might be explained in two ways. One, it’s super difficult to take any kind of next step. And the second way it could be explained is there’s no reason to take the next step.
Nick Lane
(02:38:39)
No, I think there is no reason. But at the end of it, there was a snowball Earth. So there was a planetary catastrophe on a huge scale, where the sea was frozen at the equator, and that forced change in one way or another. It’s not long after that, a hundred million years perhaps, so not a short time, but this is when we begin to see animals. There was a shift, again, another tipping point that led to catastrophic change that led to a takeoff then. We don’t really know why, but one of the reasons that I discussed in the book is about sulfate being washed into the oceans, which sounds incredibly parochial.

(02:39:23)
But the issue is, I mean, what the data is showing, we can track roughly how oxygen was going into the atmosphere from carbon isotopes. So there are two main isotopes of carbon that we need to think about here. One is carbon-12, 99% of carbon is carbon-12, and then 1% of carbon is carbon-13, which is also a stable isotope. And then there’s carbon-14, which is radioactive and present in only trivial amounts. So carbon-13 is 1%, and in life and enzymes, generally, you can think of carbon atoms as little balls bouncing around, ping-pong balls bouncing around. Carbon-12 moves a little bit faster than carbon-13.
Nick Lane
(02:40:00)
…because it’s lighter, and it’s more likely to encounter an enzyme, and so it’s more likely to be fixed into organic matter. Organic matter is enriched, and this is just an observation: it’s enriched in carbon-12 by a few percent compared to carbon-13, relative to what you would expect if it was just equal. If you then bury organic matter as coal or oil or whatever it may be, then it’s no longer oxidized. Some oxygen remains left over in the atmosphere, and that’s how oxygen accumulates in the atmosphere.

(02:40:37)
You can work out historically how much oxygen there must’ve been in the atmosphere by how much carbon was being buried. You think, well, how can we possibly know how much carbon was being buried? The answer is, well, if you’re burying carbon-12, what you’re leaving behind is more carbon-13 in the oceans, and that precipitates out into limestone. So you can look at limestones over these ages and work out what the carbon-13 signal is. That gives you a handle on what the oxygen content was.
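The inference from the limestone carbon-13 signal up to burial rates is a standard two-reservoir isotope mass balance. Here is a sketch with typical textbook round numbers; the specific per-mil values are illustrative, not taken from the conversation:

```python
# Two-box carbon isotope mass balance: carbon entering the system is split
# between carbonate (limestone) burial and organic burial, and the carbon-13
# signal of the limestone tells you the split. Values are typical round numbers.

delta_input = -5.0    # per mil, carbon entering from volcanism/weathering
delta_carb = 0.0      # per mil, measured in limestone
delta_org = -25.0     # per mil, organic matter enriched in carbon-12

# Mass balance: delta_input = f_org * delta_org + (1 - f_org) * delta_carb
# Solving for f_org, the fraction of carbon buried as organic matter:
f_org = (delta_input - delta_carb) / (delta_org - delta_carb)

print(f"Fraction of carbon buried as organic matter: {f_org:.0%}")
# More organic burial means more leftover oxygen accumulating in the air.
```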

(02:41:03)
Right before the Cambrian explosion, there was what’s called a negative isotope excursion, which is basically that the carbon-13 goes down by a massive amount and then back up again 10 million years later. What that seems to be saying is that buried carbon-12 was being released into the oceans, which is to say it was being oxidized. If it’s being oxidized, it’s consuming oxygen, and that should… A big negative carbon-13 signal says the ratio of carbon-13 to carbon-12 is really going down, which means there’s much more carbon-12 being added, which means much more organic matter being oxidized.

(02:41:44)
Sorry, this is getting too complex, but-
Lex Fridman
(02:41:46)
Well, it’s a good way to estimate the amount of oxygen.
Nick Lane
(02:41:49)
If you calculate the amount of oxygen based on the assumption that all this carbon-12 that’s being taken out is being oxidized by oxygen, the answer is that all the oxygen in the atmosphere gets stripped out, there is none left. Yet the rest of the geological indicators say, no, there’s oxygen in the atmosphere. It’s a paradox, and the only way to explain this paradox, just on mass balance of how much stuff is in the air, how much stuff is in the oceans and so on, is to assume that the oxidant was not oxygen, it was sulfate. Sulfate was being washed into the oceans. It’s used as an electron acceptor by sulfate-reducing bacteria just as we use oxygen as an electron acceptor, so they pass their electrons to sulfate instead of oxygen.
Lex Fridman
(02:42:32)
Bacteria did?
Nick Lane
(02:42:33)
Yeah, so these are bacteria. They’re oxidizing organic carbon with sulfate, passing the electrons onto sulfate, and that reacts with iron to form iron pyrites, fool’s gold, which sinks down to the bottom and gets buried out of the system. This can account for the mass balance. Why does it matter? It matters because what it says is that there was a chance event. Tectonically, there was a lot of sulfate sitting on land as some kind of mineral. Calcium sulfate minerals, for example, are evaporitic, and because there happened to be some continental collisions, mountain building, the sulfate was pushed up the side of a mountain and happened to get washed into the ocean.
Lex Fridman
(02:43:24)
I wonder how many happy accidents like that are possible.
Nick Lane
(02:43:27)
Yeah, statistically it’s really hard. Maybe you can rule that in statistically or rule out, but this is the course of life on Earth. Without all that sulfate being raised up, the Cambrian explosion almost certainly would not have happened and then we wouldn’t have had animals and so on and so on.
Lex Fridman
(02:43:44)
This explanation of the Cambrian explosion… Let me actually ask about it in several ways. So folks who challenge the validity of the theory of evolution will give the Cambrian explosion as an example, like this thing is weird. Now, I’m not well studied in this.
Nick Lane
(02:44:02)
Oh, it’s weird. Yeah.
Lex Fridman
(02:44:11)
The question I would have is, what’s the biggest mystery or gap in understanding about evolution? Is it the Cambrian explosion? If so, first of all, what is it? In my understanding, in a short amount of time, maybe 10 million years, 100 million years, something like that, a huge number and diversity of animals were created. Anyway, there are five questions in there. Is that the biggest mystery to you about evolution?
Nick Lane
(02:44:44)
No, I don’t think it’s particularly a big mystery really anymore. There are still mysteries about why then. I’ve just said sulfate being washed into the oceans is one. It needs oxygen, and oxygen levels rose around that time. Probably before that, they weren’t high enough for animals. What we’re seeing with the Cambrian explosion is the beginning of predator and prey relationships. We’re seeing modern ecosystems, we’re seeing arms races, and we’re seeing the full creativity of evolution unleashed. I talked about the boring billion. Nothing happens for a billion, a billion and a half years.

(02:45:29)
The assumption then, and this is completely wrong, is that evolution works really slowly, that you need billions of years to effect some small change and then another billion years to do something else. It’s completely wrong. Evolution gets stuck in stasis and it stays that way for tens of millions, hundreds of millions of years. Stephen Jay Gould used to argue this; he called it punctuated equilibrium, but he was arguing it for animals and for the last 500 million years or so, where it’s much less obvious than if you think about the entire planetary history. Then you realize that the first 2 billion years was bacteria only. You have the origin of life, 2 billion years of just bacteria, oxygenic photosynthesis arising here. Then you have a global catastrophe, snowball Earths, and the Great Oxidation Event, and then another billion years of nothing happening, and then some period of upheavals and then another snowball Earth. Then suddenly you see the Cambrian explosion.

(02:46:23)
These are long periods of stasis where the world is in a stable state and is not geared towards increasing complexity. Everything is just in balance. Only when you have a catastrophic global-level problem, like a snowball Earth, does it force everything out of balance; there’s a tipping point and you end up somewhere else. Now, the idea that evolution is slow is wrong. It can be incredibly fast. I mentioned earlier on that in theory it would take half a million years to invent an eye, for example, from a light-sensitive spot. It doesn’t take long to convert one kind of tube into a tube with knobbles on it, into a tube with arms on it, and then multiple arms, and then one end becomes a head; it starts out as a swelling. It’s not difficult intellectually to understand how these things can happen.

(02:47:18)
It boggles the mind that it can happen so quickly, but we’re used to human timescales. What we need to talk about is generations of things that live for a year in the ocean, and then a million years is a million generations. The amount of change that you can effect in that period of time is enormous. We’re dealing with large populations of things where selection is sensitive to pretty small changes. Again, as soon as you throw in the competition of predators and prey, you’re ramping up the scale of evolution, and it’s not very surprising that it happens very quickly when the environment allows it to happen.
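Nick’s point here, that weak selection compounded over many generations produces enormous change, can be made concrete with a textbook deterministic selection model. This is an editorial sketch with arbitrary numbers (a 1% fitness advantage, starting at 0.1% frequency), not a calculation from the conversation:

```python
def generations_to_fixation(s=0.01, p0=0.001, p_end=0.99):
    """Deterministic single-locus selection: a variant with a small
    relative fitness advantage s spreads from rarity (frequency p0)
    to near-fixation (p_end). Returns the generations required."""
    p, gens = p0, 0
    while p < p_end:
        # standard replicator update for relative fitness 1+s vs 1
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# A mere 1% advantage carries a trait from 0.1% to 99% of a large
# population in on the order of a thousand generations
print(generations_to_fixation())
```

For a short-lived marine organism with one generation per year, that is roughly a thousand years out of the million available, which is why sustained change over a million generations dwarfs human intuitions about speed.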

(02:47:58)
I don’t think there’s a big mystery. There’s lots of details that need to be filled in. The big mystery in biology is consciousness.
Lex Fridman
(02:48:11)
The big mystery in biology is consciousness? Well, intelligence is a mystery too. You said biology, not psychology, because from a biology perspective, it seems like intelligence and consciousness are all the same weird brain stuff.
Nick Lane
(02:48:37)
I don’t see intelligence as necessarily that difficult, I suppose. I see it as a form of computing, and I don’t know much about computing.
Lex Fridman
(02:48:46)
Well, you don’t know much about consciousness either. Oh, I see, I see, I see, I see. That consciousness you do know a lot about as a human being.
Nick Lane
(02:49:00)
No, no. I can understand the wiring of a brain in pretty much the same way as a computer in theory, in terms of the circuitry of it. The mystery to me is how this system gives rise to feelings, as we were talking about earlier on.
Lex Fridman
(02:49:23)
Yeah, I think we oversimplify intelligence. I think the dance, the magic of reasoning is as interesting as the magic of feeling. We tend to think of reasoning as running a very simplistic algorithm. I think reasoning is the interplay between memory, whatever the hell is going on, the unconscious mind, all of that.
Nick Lane
(02:49:55)
I’m not trying to diminish it in any way at all. Obviously, it’s extraordinarily exquisitely complex, but I don’t see a logical difficulty with how it works.
Lex Fridman
(02:50:06)
Yeah, no, I agree with you, but sometimes, yeah, there’s a big cloak of mystery around consciousness.
Nick Lane
(02:50:16)
Let me compare it with classical versus quantum physics. Classical physics is logical and you can understand the language we’re dealing with. It’s almost at the human level, we’re dealing with stars and things that we can see. When you get to quantum mechanics and things, it’s practically impossible for the human mind to compute what just happened there.
Lex Fridman
(02:50:39)
Yeah, it is the same. It’s like you understand mathematically the notes of a musical composition, that’s intelligence. But why it makes you feel a certain way, that is much harder to understand. It was an interesting framing that that’s a mystery at the core of biology. I wonder who solves consciousness. I tend to think consciousness will be solved by the engineer, meaning the person who keeps trying to build the thing, versus biology, which is such a complicated system. I feel like, from a biological perspective, consciousness is the final creation of a human being, so you have to understand the whole damn thing. You said the electrical fields, but electrical fields plus everything, the whole shebang.
Nick Lane
(02:51:47)
I’m inclined to agree. My feeling, from my meager knowledge of the history of science, is that the biggest breakthroughs have usually come from an unrelated field. If anything, it’s not going to be a biologist who solves consciousness, just because biologists are too embedded in the nature of the problem. Then nobody’s going to believe you when you’ve done it, because nobody’s going to be able to prove that this AI is in fact conscious and sad, any more than you can prove that a dog is conscious and sad; it tells you that it is in its own language and you must believe it.

(02:52:24)
But I think most people will accept, if faced with that, that that’s what it is. All of this probability, though, of complex life: in one way, I think why it matters is that my expectation, I suppose, is that over the next 100 years or so, if we survive it all, AI will increasingly dominate. Pretty much anything that we put out into space to look for what’s out there in the universe will be AI. It won’t be us; we won’t be doing that, or when we do, it will be on a much more limited scale. I suppose the same would apply to any alien civilization.

(02:53:12)
Perhaps rather than looking for signs of life out there, we should be looking for AI out there, but then we face the problem that I don’t see how a planet is going to give rise directly to AI. I can see how a planet can give rise directly to organic life, and if the principles that govern the evolution of life on Earth apply to other planets as well, and I think a lot of them would, then the likelihood of ending up with a human-like civilization capable of giving rise to AI in the first place is massively limited. Once you’ve done it once, perhaps it takes over the universe and maybe there’s no issue, but it seems to me that the two are necessarily linked, that you’re not going to just turn a sterile planet into an AI life form without the intermediary of the organics first.
Lex Fridman
(02:54:09)
You have to run the full evolutionary computation with your organics to create AI?
Nick Lane
(02:54:15)
How does AI bootstrap itself up without the aid, if you like, of an intelligent designer?

Fermi paradox

Lex Fridman
(02:54:20)
The origin of AI is going to have to be in the chemistry of a planet, but that’s not a limiting factor. Let me ask the Fermi Paradox question. Let’s say we live in this incredibly dark and beautiful world of just billions of planets with bacteria on them and very few intelligent civilizations, and yet there’s a few out there. Why haven’t we at scale seen them visit us? What’s your sense? Is it because they don’t exist? Is it because-
Nick Lane
(02:55:02)
Well, they don’t exist in the right part of the universe at the right time. That’s the simplest answer for it.
Lex Fridman
(02:55:08)
Is that the one you find the most compelling or is there some other explanation?
Nick Lane
(02:55:14)
No, it’s not that I find it more compelling, it’s that I find it more probable, and I find all of them… there’s a lot of hand-waving in this; we just don’t know. I’m trying to read out from what I know about life on Earth to what might happen somewhere else. It gives, to my mind, a bit of a pessimistic view of bacteria everywhere and only occasional intelligent life. Running forward, humans arose only once on Earth, and there’s nothing else that you would necessarily be any more excited about making contact with than you would be making contact with them on Earth.

(02:55:50)
I think the chances are pretty limited, and the chances of us surviving are pretty limited too. The way we’re going on at the moment, the likelihood of us not making ourselves extinct within the next few hundred years, possibly within the next 50 or 100 years, seems quite small. I hope we can do better than that. Maybe the only thing that will survive from humanity will be AI, and maybe once AI exists, and once it’s capable of effectively copying itself and cutting humans out of the loop, then maybe that will take over the universe.
Lex Fridman
(02:56:24)
There’s an inherent sadness to the way you described that, but isn’t that also potentially beautiful that that’s the next step of life? I suppose from your perspective, as long as it carries the flame of consciousness somehow.
Nick Lane
(02:56:41)
I think yes, there can be some beauty to it being the next step of life. I don’t know if consciousness matters or not from that point of view, to be honest with you, but there’s some sadness, yes, probably because I think it comes down to the selfishness that we were talking about earlier on. I am an individual with a desire not to be displaced from life. I want to stay alive, I want to be here. I suppose the threat that a lot of people would feel is that we will just be wiped out, so there will be potential conflicts between AI and humans, and that AI will win because it’s a lot smarter.
Lex Fridman
(02:57:25)
Boy, would that be a sad state of affairs if consciousness is just an intermediate stage between bacteria and AI.
Nick Lane
(02:57:34)
Well, I would see bacteria as being potentially a primitive form of consciousness anyway. The whole of life on Earth to my mind-
Lex Fridman
(02:57:43)
Is conscious.
Nick Lane
(02:57:44)
… Is capable of some form of feeling in response to the environment. That’s not to say it’s intelligent, though it’s got its own algorithms for intelligence, but nothing comparable with us. I think it’s beautiful what a sterile planet can come up with. It’s astonishing that it’s come up with all of this stuff that we see around us, and that either we or whatever we produce is capable of destroying all of that is a sad thought, but it’s also hugely pessimistic. I’d like to think that we’re capable of giving rise to something, AI, which is at least as good as us, if not better.
Lex Fridman
(02:58:24)
Yeah, I have that same optimism, especially a thing that is able to propagate throughout the universe more efficiently than humans can or extensions of humans, some merger with AI and humans, whether that comes from bioengineering of the human body to extend its life somehow to carry that flame of consciousness and that personality and the beautiful tension that’s within all of us, carry that through to multiple planets, to multiple solar systems all out there in the universe. That’s a beautiful vision. Whether AI can do that or bio engineered humans can, that’s an exciting possibility. Especially meeting other alien civilizations in that same way.

(02:59:14)
Do you think aliens have consciousness?
Nick Lane
(02:59:16)
If they’re organic, yes.
Lex Fridman
(02:59:18)
Organic, connected to consciousness?
Nick Lane
(02:59:20)
I think any system which is going to bootstrap itself up from planetary origins… let me finish this and then I’ll come onto something else… but from planetary origins, it is going to face similar constraints, and those constraints are going to be addressed in similar basic engineering ways. I think it will be cellular, and I think it will have electrical charges, and I think it will have to be selected in populations over time. All of these things will tend to give rise to the same processes as the simplest fix to a difficult problem. I would expect it to be conscious, yes, and I would expect it to resemble life on Earth in many ways. When I was about 15 or 16, I remember reading a book by Fred Hoyle called The Black Cloud. I was a budding biologist at the time, and this was the first time I’d come across someone really challenging the heart of biology and saying, “You are far too parochial. You’re thinking about life as carbon-based. Here’s a life form which is kind of dust, interstellar dust, on a solar system scale.”

(03:00:28)
It’s a novel, but I felt enormously challenged by that novel because it hadn’t occurred to me how limited my thinking was, how narrow-minded I was being. Here was a great physicist with a completely different conception of what life could be. Since then, I’ve seen him attacked in various ways. I’m reluctant to say the attacks make more sense to me than the original story, which is to say, even in terms of information processing, if you’re on that scale, there’s a limit to how quickly something can think; if you’re needing to broadcast across the solar system, it is going to be slow.

(03:01:16)
It’s not going to hold a conversation with you on the timelines that Fred Hoyle was imagining, or at least not by any easy way of doing it, assuming that the speed of light is a limit. Then again, you really can’t. This is something Richard Dawkins argued long ago, and I do think he’s right: there is no other way to generate this level of complexity than natural selection. Nothing else can do it. You need populations and you need selection in populations, and in an isolated interstellar cloud, maybe there’s unlimited time and no problems with distance, but you need to have a certain frequency of generational time to generate a serious level of complexity. I just have a feeling it’s never going to work.
Lex Fridman
(03:02:11)
Well, as far as we know. Natural selection, evolution, is a really powerful tool here on Earth, but there could be other mechanisms. I don’t know if you’re familiar with cellular automata, but they’re complex systems that have really simple components moving based on simple rules, and when they’re taken as a whole, really interesting complexity emerges. I don’t know what the pressures on that are. It’s not really selection, but interesting complexity seems to emerge, and it’s not well understood exactly why that complexity emerges.
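The emergence Lex is describing can be seen in Conway’s Game of Life, the best-known cellular automaton. This is an editorial illustration, not code from the conversation: two fixed local rules, no selection, and yet a “glider” pattern emerges that travels across the grid.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over an unbounded grid.
    `live` is a set of (x, y) coordinates of live cells."""
    # tally how many live neighbors every nearby cell has
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the classic five-cell glider
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# after 4 generations the same shape reappears, shifted one cell
# diagonally -- motion that nothing in the rules mentions
```

Nothing in the two rules refers to movement or shape, yet a coherent, self-propagating structure falls out of them, which is the sense in which "interesting complexity emerges" without selection.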
Nick Lane
(03:02:46)
I think there’s a difference between complexity and evolution. Some of the work we’re doing on the origin of life is thinking about how genes arise. How does information arise in biology? Thinking about it from the point of view of reacting CO₂ with hydrogen, what do you get? Well, what you’re going to get is carboxylic acids, then amino acids. It’s quite hard to make nucleotides. It’s possible to make them, and it’s been done, and it’s being done following this pathway as well, but you make trace amounts. The next question, assuming that this is the right way of seeing the question, which maybe it’s just not, but let’s assume it is: how do you reliably make more nucleotides? How do you become more complex and better at becoming a nucleotide-generating machine? The answer is, well, you need positive feedback loops, some form of autocatalysis.

(03:03:40)
That can work, and we know it happens in biology. If this nucleotide, for example, catalyzes CO₂ fixation, then you’re going to increase the rate of flux through the whole system, and you’re going to effectively steepen the driving force to make more nucleotides. This can be inherited because there are forms of membrane heredity: if a cell divides in two and it’s got a lot of stuff inside it, and that stuff is bound as a network which is capable of regenerating itself, then it will inevitably regenerate itself.
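That positive feedback loop can be sketched numerically. This is an editorial toy model with arbitrary rates (the function name and parameters are invented for illustration, not taken from Nick’s work): each step produces a trace baseline amount of product, plus extra flux proportional to how much product already exists to catalyze the reaction.

```python
def synthesize(steps, base_rate=1.0, feedback=0.05):
    """Toy model of product formation: each step yields a trace
    baseline amount plus extra flux proportional to the product
    already present (the product catalyzes its own formation)."""
    n = 0.0
    for _ in range(steps):
        n += base_rate + feedback * n
    return n

no_loop = synthesize(200, feedback=0.0)  # trace amounts just add up
with_loop = synthesize(200)              # feedback amplifies the flux
```

Without feedback the product accumulates only linearly; with even a 5% autocatalytic boost per step, the same driving force yields orders of magnitude more product, which is the "steepening" of the driving force described above.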

(03:04:17)
You can develop greater complexity, but everything that I’ve said depends on the underlying rules of thermodynamics. There is no evolvability about that. It’s simply an inevitable outcome of your starting point, assuming that you’re able to increase the driving force through the system. You will generate more of the same, you’ll expand on what you can do, but you’ll never get anything different from that. It’s only when you introduce information into that as a gene, as a small stretch of RNA, which can be a random stretch, that you get real evolvability. Then you get biology as we know it, but you’ll also have selection as we know it.
Lex Fridman
(03:05:00)
Yeah. I don’t know how to think about information. That’s the memory of the system. At the local level, it’s propagation, copying yourself and changing and improving your adaptability to the environment, but if you look at Earth as a whole, it has a memory. That’s the key feature of it.
Nick Lane
(03:05:25)
In what way?
Lex Fridman
(03:05:27)
It remembers the stuff it tries. If you were to describe Earth, I think evolution is something that we experience as individual organisms. That’s how the individual organisms interact with each other, there’s a natural selection. But when you look at Earth as an organism in its entirety, how would you describe it?
Nick Lane
(03:05:56)
Well, not as an organism. The idea of Gaia is lovely and James Lovelock originally put Gaia out as an organism that had somehow evolved and he was immediately attacked by lots of people. He’s not wrong, but he backpedaled somewhat because that was more of a poetic vision than the science. The science is now called Earth systems science, and it’s really about how does the world regulate itself so it remains within the limits which are hospitable to life, and it does it amazingly well. It is working at a planetary level of integration of regulation, but it’s not evolving by natural selection. It can’t because there’s only one of it. It can change over time, but it’s not evolving. All the evolution is happening in the parts of the system.
Lex Fridman
(03:06:50)
Yeah, but it’s a self-sustaining organism.
Nick Lane
(03:06:53)
No, it’s self-sustained by the sun.
Lex Fridman
(03:06:56)
Right, you don’t think it’s possible to see Earth as its own organism?
Nick Lane
(03:07:03)
I think it’s poetic and beautiful, and I often refer to the Earth as a living planet, but it’s not in biological terms an organism, no.
Lex Fridman
(03:07:14)
If aliens were to visit Earth, what would they notice? What would be the basic unit of life that they would notice?
Nick Lane
(03:07:24)
Trees probably; it’s green, it’s green and blue. I think that’s the first thing you’d notice: it stands out from space as being different to any of the other planets.
Lex Fridman
(03:07:33)
They’d notice the trees at first because of the green?
Nick Lane
(03:07:36)
Well, I would. I notice the green, yes.
Lex Fridman
(03:07:38)
Yeah. Then they’d probably figure out the photosynthesis and then-
Nick Lane
(03:07:43)
They’d probably notice cities second, I suspect. Maybe first. If they arrived at night, they’d notice cities first, that’s for sure.

Cities

Lex Fridman
(03:07:50)
Yeah, it depends on the time. You write quite beautifully in Transformer. Once again, I think you opened the book in this way, I don’t remember. From space, describing Earth. It’s such an interesting idea of what Earth is, the Hitchhiker’s Guide summarizing it as harmless, or mostly harmless. It’s a beautifully poetic thing.

(03:08:15)
You open Transformer with “From space, it looks gray and crystalline, obliterating the blue-green colors of the living Earth. It is crisscrossed by regular patterns and converging striations. There’s a central amorphous density where these scratches seem lighter. This ‘growth’ does not look alive, although it has extended out along some lines and there is something grasping and parasitic about it. Across the globe, there are thousands of them varying in shape and detail, but all of them, gray, angular, inorganic, spreading. Yet at night they light up, glowing up the dark sky, suddenly beautiful. Perhaps these cankers on the landscape are in some sense living. There’s a controlled flow of energy. There must be information and some form of metabolism, some turnover of materials. Are they alive? No, of course not. They are cities.”

(03:09:17)
Is there some sense that cities are living beings? You think aliens would think of them as living beings?
Nick Lane
(03:09:25)
Well, it’d be easy to see it that way, wouldn’t it?
Lex Fridman
(03:09:29)
It wakes up at night, they wake up at night.
Nick Lane
(03:09:33)
Strictly nocturnal, yes. I imagine that any aliens that are smart enough to get here would understand that they’re not living beings. My reason for saying that is that we tend to think of biology in terms of information and forget about the cells. I was trying to draw a comparison between the cell as a city and the energy flow through the city and the energy flow through cells and the turnover of materials. An interesting thing about cities is that they’re not really exactly governed by anybody. There are regulations and systems and whatever else, but it’s pretty loose. They have their own life, their own way of developing over time.

(03:10:24)
In that sense, they’re quite biological. There was a plan after the Great Fire of London: Christopher Wren was making plans not only for St. Paul’s Cathedral, but also to rebuild a large part of the burnt area of central London with large Parisian-type boulevards. It never happened because they didn’t have enough money, I think, but it’s interesting what was in the plan. There were all these boulevards, but there were no pubs and no coffee houses or anything like that. The reality was London just grew up as a set of jumbled streets.

(03:11:03)
It was the coffee houses and the pubs where all the business of the City of London was being done. That was where the real life of the city was. No one had planned it. The whole thing was unplanned and works much better that way. In that sense, the cell is completely unplanned. It’s not controlled by the genes in the nucleus in the way that we might like to think that it is, but it’s an evolved entity that has the same flux, the same animation, the same life. I think it’s a beautiful analogy, but I wouldn’t get too stuck with it as a metaphor.
Lex Fridman
(03:11:32)
See, I disagree with you. I disagree with you. I think you are so steeped… Actually, the entirety of science, the history of science, is steeped in a biological framework of thinking about what life is. Not just biological, it’s very human-centric too: the human organism is the epitome of life on Earth. I don’t know, I think there is some deep fundamental way-

(03:12:00)
I don’t know. I think there is some deep fundamental way in which a city is a living being in the same way that a-
Nick Lane
(03:12:10)
It doesn’t give rise to an offspring city. So it doesn’t work by natural selection; it works by, if anything, memes. It works by-
Lex Fridman
(03:12:19)
Yeah. But isn’t it-
Nick Lane
(03:12:20)
Copying itself conceptually as a mode of being?
Lex Fridman
(03:12:24)
So, maybe memes, maybe ideas, are the organisms that are really essential to life on Earth. Maybe the collective aspect of human nature, the collective intelligence, is much more important than the individual intelligence. Maybe the collective humanity is the organism, and the thing that defines the collective intelligence of humanity is the ideas. And maybe the way that manifests itself is cities, or societies, or geographically constrained societies or nations and all that kind of stuff. From an alien perspective, it’s possible that that is the more deeply noticeable thing, and not from a place of ignorance.
Nick Lane
(03:13:08)
Yes, but what’s noticeable doesn’t tell you how it works. I don’t have any problem with what you’re saying really, except that it’s not possible without the humans. We went from a hunter-gatherer type economy, if you like, without cities, through to cities. And as soon as we get into human evolution and culture and society and so on, then yes, there are other forms of evolution, other forms of change. But cities don’t directly propagate themselves; they propagate themselves through human societies. And human societies only exist because humans as individuals propagate themselves. So there is a hierarchy there. And without the humans in the first place, none of the rest of it exists.
Lex Fridman
(03:13:54)
So for you, life is primarily defined by the basic unit on which evolution can operate on Earth.
Nick Lane
(03:14:02)
I think it’s a really important thing, yes.
Lex Fridman
(03:14:04)
Yeah. And we don’t have any other better ideas than evolution for how to create life.
Nick Lane
(03:14:10)
I never came across a better idea than evolution. Maybe I’m just ignorant and I don’t know. And you mentioned cellular automata and so on; I don’t think specifically about that, but I have thought about it in terms of selective units at the origin of life and the difference between evolvability and complexity, or just increasing complexity within very narrowly defined limits. The great thing about genes and about selection is it just knocks down all those limits. It gives you a world of information in the end, which is limited only by the biophysical reality of what kind of an organism you are, what kind of a planet you live on and so on. And cities and all these other forms that look alive and could be described as alive, because they can’t propagate themselves, can only exist as the product of something that did propagate itself.
Lex Fridman
(03:15:05)
Yeah, there’s a deeply compelling truth to that kind of way of looking at things, but I just hope that we don’t miss the giant cloud among us.
Nick Lane
(03:15:18)
I kind of hope that I’m wrong about a lot of this, because I can’t say that my worldview is particularly uplifting, but in some sense it doesn’t matter if it’s uplifting or not. Science is about what’s real. What’s out there? Why is it this way? And I think there’s beauty in that too.

Depression

Lex Fridman
(03:15:39)
There’s beauty in darkness. You write about life and death at the biological level. The question of suicide, why live? The question of why the human mind is capable of depression. Are you able to introspect on that from a place of biology? Why can our minds, why can we humans, go to such dark places? Why can we commit suicide? Why can we suffer? Suffer, period, but also suffer from a feeling of meaninglessness, from going to the dark place that depression can take you? Is this a feature of life or is it a bug?
Nick Lane
(03:16:30)
I don’t know. If it’s a feature of life, then I suppose it would have to be true of other organisms as well, and I don’t know. We were talking about dogs earlier on, and they can certainly be very sad and upset and may mooch around for days after their owner dies or something like that. So I suspect in some sense it’s a feature of biology. It is probably a feature of mortality. It’s probably a… But beyond all of that, I guess there’s two ways you could come at it. One of them would be to say, well, you can effectively do the math and come to the conclusion that it’s all pointless and that there’s really no point in me being here any longer. And maybe that’s true in the greater scheme of things. You can justify yourself in terms of society, but society will be gone soon enough as well. And you end up in a very bleak place just by logic.
Lex Fridman
(03:17:26)
In some sense, it’s surprising that we can find any meaning at all.
Nick Lane
(03:17:30)
Well, maybe this is where consciousness comes in: we have transient joy, but with transient joy, we have transient misery as well. And as with everything in biology, getting the regulation right is practically impossible. You will always have a bell-shaped curve where some people are at the joy end and some people, unfortunately, are at the misery end, and that’s the way brains are wired. And I doubt there’s ever an escape from that. It’s the same with sex and everything else as well; you can’t regulate it precisely, so anything goes. It’s all part of biology.

Writing

Lex Fridman
(03:18:12)
Amen to that. Let me ask, on writing, about your book Power, Sex, Suicide. First of all, can I just read off the books you’ve written? If there are any better titles and topics to be covered, I don’t know what they are. It makes me look forward to whatever you’re going to write next. I hope there are things you write next. So first you wrote Oxygen: The Molecule that Made the World; we’ve talked about this idea of the role of oxygen in life on Earth. Then, wait for it, Power, Sex, Suicide: Mitochondria and the Meaning of Life. Then Life Ascending: The Ten Great Inventions of Evolution. Then the first book of yours I read, The Vital Question: Why Is Life the Way It Is? And the new book, Transformer: The Deep Chemistry of Life and Death. In Power, Sex, Suicide, you write about writing, or about a lot of things, but I have a question about writing.

(03:19:13)
You write that in The Hitchhiker’s Guide to the Galaxy, Ford Prefect spends 15 years researching his revision to the Guide’s entry on the Earth, which originally read, “Harmless.” By the way, as a side question, I would also like to ask you what would be your summary of what Earth is.

(03:19:34)
But you write, “His long essay on the subject is edited down by the Guide to read ‘Mostly Harmless.’ I suspect that too many new editions suffer a similar fate, if not through absurd editing decisions, at least through a lack of meaningful change in content. As it happens, nearly 15 years have passed since the first edition of Power, Sex, Suicide was published, and I am resisting the temptation to make any lame revisions. Some say that even Darwin lessened the power of his arguments in the Origin of Species through his multiple revisions, in which he dealt with criticisms and sometimes shifted his views in the wrong direction. I prefer my original to speak for itself even if it turns out to be wrong.”

(03:20:23)
Let me ask the question about writing. You write both for your students in the academic setting, and also some of the most brilliant writing on science and humanity I’ve ever read. What’s the process of writing? How do you advise other humans? If you were to talk to young Darwin, or the young you, or just young anybody, and give advice about how to write, and how to write well about these big topics, what would you say?
Nick Lane
(03:20:57)
I suppose there’s a couple of points. One of them is: what’s the story? What do I want to know? What do I want to convey? Why does it matter to anybody? And very often the biggest, most interesting questions, the childlike questions, are the ones that actually everybody wants to ask but daren’t quite, in case they look stupid. And one of the nice things about being in science is the longer you’re in it, the more you realize that nobody knows the answer to these questions, and it’s not so stupid to ask them after all.

(03:21:36)
So I try to ask the questions that I would’ve been asking myself at the age of 15 or 16, when I was really hungry to know about the world and didn’t know very much about it, and wanted to go to the edge of what we know but be helped to get there. I don’t want too much terminology. And so I want someone to keep a clean eye on what the question is. Beyond that, I’ve wondered a lot about who I am writing for, and in the end, the only answer I had was myself at the age of 15 or 16. Because you just don’t know who’s reading it, or where they’re reading it. Are they reading it in the bath or in bed or on the metro, or are they listening to an audiobook? Do you want to have a recapitulation every few pages because they read three pages at a time?

(03:22:41)
Or are you really irritated by that? You’re going to get criticism from people who are irritated by what you’re doing and you don’t know who they are or what you’re going to do that’s going to irritate people. And in the end, all you can do is just try and please yourself. And that means what are these big, fun, fascinating, big questions, and what do we know about it? And can I convey that? And I kind of learned in trying to write, first of all, say what we know. And I was shocked in the first couple of books how often I came up quickly against all the stuff we don’t know.

(03:23:21)
And if you’re trying to… I realized later on in supervising various physicists and mathematicians who are PhD students and I know their math is way beyond what I can do. But the process of trying to work out what are we actually going to model here, what’s going into this equation? It’s a very similar one to writing, what am I going to put on a page? What’s the simplest possible way I can encapsulate this idea so that I now have it as a unit that I can kind of see how it interacts with the other units? And you realize that, well, if this is like that and this is like this, then that can’t be true.

(03:23:58)
So you end up navigating your own path through this landscape and that can be thrilling because you don’t know where it’s going. And I’d like to think that that’s one of the reasons my books have worked for people because this sense of thrilling adventure ride, I don’t know where it’s going either.
Lex Fridman
(03:24:14)
So finding the simplest possible way to explain the things we know and the simplest possible way to explain the things we don’t know and the tension between those two, that’s where the story emerges. What about the edit? Do you find yourself at the point of this editing down to “Mostly Harmless”? To arrive at simplicity, do you find the edit is productive or does it destroy the magic that was originally there?
Nick Lane
(03:24:44)
No, I usually find… I think I’m perhaps a better editor than I’m a writer. I write and rewrite and rewrite and rewrite.
Lex Fridman
(03:24:51)
Put a bunch of crap on the page first and then see where the edit takes it.
Nick Lane
(03:24:56)
But then there’s the professional editors who come along as well. And in Transformer, the editor came back to me after I’d sent… Two months after I sent the first edition, he’d read the whole thing and he said, “The first two chapters present a formidable hurdle to the general reader, go and do something about it.” And that was the last thing I really wanted to hear.
Lex Fridman
(03:25:18)
But your editor sounds very eloquent in speech.
Nick Lane
(03:25:21)
Yeah. Well, this was an email, but I thought about it. The bottom line is he was right. And so I put the whole thing aside for about two months, spent the summer, this would’ve been I guess last summer, and then turned to it with full attention in about September or something and rewrote those chapters almost from scratch. I kept some of the material, but it took me a long time to process it, to work out what needs to change, where does it need to… I wasn’t writing in this time, how am I going to tell this story better so it’s more accessible and interesting. And in the end, I think it worked. It is still difficult, it’s still biochemistry, but he ended up saying, “Now it’s got a barreling energy to it.” And because he’d told me the truth the first time I decided to believe that he was telling me the truth the second time as well and was delighted.

Advice for young people

Lex Fridman
(03:26:13)
Could you give advice to young people in general, folks in high school, folks in college, how to take on some of the big questions you’ve taken on. Now you’ve done that in the space of biology and expand it out, how can they have a career they can be proud of or have a life they can be proud of?
Nick Lane
(03:26:35)
Gosh, that’s a big question.
Lex Fridman
(03:26:40)
I’m sure you’ve gathered some wisdom that you can impart.
Nick Lane
(03:26:46)
So the only advice that I actually ever give to my students is follow what you’re interested in. Because they’re often worried that if they make this decision now and do this course instead of that course, then they’re going to restrict their career opportunities. And there isn’t a career path in science. There is but isn’t. There’s a lot of competition. There’s a lot of death symbolically. So who survives? The people who survive are the people who care enough to still do it. And they’re very often the people who don’t worry too much about the future and are able to live in the present. If you do a PhD, you’ve competed hard to get onto the PhD, then you have to compete hard to get a post-doc job, and you have the next one maybe on another continent, and it’s only two years anyway, and there’s no guarantee you’re going to get a faculty position at the end of it.
Lex Fridman
(03:27:51)
And there’s always the next step to compete. If you get a faculty position, you get tenure, and with tenure you go full professor, and as full professor, then you go to some kind of… whatever the discipline is, there’s an award. If you’re in physics, you’re always competing for the Nobel Prize. There’s different awards, and then eventually you’re all competing to… There’s always a competition.
Nick Lane
(03:28:12)
So there is no happiness. Happiness does not lie…
Lex Fridman
(03:28:15)
If you’re looking into the future, yes.
Nick Lane
(03:28:16)
And if what you’re caring about is a career, then it’s probably not the one for you. If though, you can put that aside. And I’ve also worked in industry for a brief period and I was made redundant twice, so I know that there’s no guarantee that you’ve got a career that way either.
Lex Fridman
(03:28:37)
Yes.
Nick Lane
(03:28:40)
So live in the moment and try and enjoy what you’re doing. And that means really go to the themes that you’re most interested in and try and follow them as well as you can. And that tends to pay back in surprising ways. I don’t know if you’ve found this as well, but I found that people will help you often, if they see some light shining in the eye and you are excited about their subject and just want to talk about it. And they know that their friend in California’s got a job coming up, they’ll say, “Go for this. This guy’s all right.” They’ll use the network to help you out if you really care. And you’re not going to have a job two years down the line, but what you really care about is what you’re doing now, then it doesn’t matter if you have a job in two years time or not. It’ll work itself out if you’ve got the light in your eye. And so that’s the only advice I can give. And most people probably drop out through that system because the fight is just not worth it for them.
Lex Fridman
(03:29:49)
Yeah, when you have the light in your eye, when you have the excitement for the thing, what happens is you start to surround yourself with others that are interested in that same thing that also have the light. If you really are rigorous about this, I think it takes effort to make…
Nick Lane
(03:30:07)
Oh, you’ve got to be obsessive. But if you’re doing what you really love doing, then it’s not work anymore, it’s what you do.
Lex Fridman
(03:30:13)
But I also mean the surrounding yourself with other people that are obsessed about the same thing because depending on-
Nick Lane
(03:30:19)
Oh, that takes some work as well.
Lex Fridman
(03:30:20)
Yes.
Nick Lane
(03:30:21)
And luck.
Lex Fridman
(03:30:21)
Finding the right mentors, the collaborators. Because I think one of the problems with the PhD process is people are not careful enough in picking their mentors. Those are people… Mentors and colleagues and so on, those people are going to define the direction of your life, how much you love a thing. The power of just the few little conversations you have in the hallway is incredible. So you have to be a little bit careful in that sometimes you just get randomly almost assigned. Really pursue, I suppose, the subject as much as you pursue the people that do that subject. So both, the whole dance of it.
Nick Lane
(03:31:09)
They kind of go together really.
Lex Fridman
(03:31:10)
Yeah, they do. They really do. But take that part seriously, and probably in the way you’re describing it, careful how you define success because-
Nick Lane
(03:31:22)
You’ll never find happiness in success. I think there’s a lovely quote from Robert Louis Stevenson, I think, who said, “Nothing in life is so disenchanting as attainment.”
Lex Fridman
(03:31:33)
Yeah. So in some sense, the true definition of success is getting to do today, what you really enjoy doing, just what fills you with joy. And that’s ultimately success. Success isn’t the thing beyond the horizon, the big trophy, the financial-
Nick Lane
(03:31:54)
I think it’s as close as we can get to happiness. That’s not to say you’re full of joy all the time, but it’s as close as we can get to a sustained human happiness is by getting some fulfillment from what you’re doing on a daily basis. And if what you’re looking for is the world giving you the stamp of approval with a Nobel Prize or a fellowship or whatever it is, then I’ve known people like this who they’re eaten away by the anger, the kind of caustic resentment that they’ve not been awarded this prize that they deserve.
Lex Fridman
(03:32:30)
And the other way, if you put too much value into those kinds of prizes and you win them, I’ve gotten the chance to see that the more “successful” you are in that sense, the more you run the danger of growing ego so big that you don’t get to actually enjoy the beauty of this life. You start to believe that you figured it all out, as opposed to, I think what ultimately the most fun thing is being curious about everything around you, being constantly surprised and these little moments of discovery, of enjoying beauty in small and big ways all around you.

(03:33:12)
And I think the bigger your ego grows, the more you start to take yourself seriously, the less you’re able to enjoy that.
Nick Lane
(03:33:17)
Oh man, I couldn’t agree more.

Earth

Lex Fridman
(03:33:20)
So the summary from harmless to mostly harmless in Hitchhiker’s Guide to the Galaxy, how would you try to summarize Earth? And if you had to summarize the whole thing in a couple of sentences and maybe throw in meaning of life in there, why? Maybe is that a defining thing about humans that we care about the meaning of the whole thing? I wonder if that should be part of the… These creatures seem to be very lost.
Nick Lane
(03:33:58)
Yes. They’re always asking why. That’s my defining question: why. People used to make a joke. I have a small scar on my forehead from a climbing accident years ago, and the guy I was climbing with had dislodged a rock and he shouted something, he shouted, “Below,” I think, meaning that the rock was coming down, and I hadn’t caught what he said. So I looked up and it smashed straight into my forehead, and everybody around me took the piss saying, “He looked up to ask why.”
Lex Fridman
(03:34:32)
Yeah, but that’s a human imperative, that’s part of what it means to be human. Look up to the sky and ask why.
Nick Lane
(03:34:42)
So your question, define the Earth. I’m not sure I can do that. The first word that comes to mind is living, I wouldn’t like to say mostly living, but perhaps.
Lex Fridman
(03:34:57)
Mostly living. Well, it’s interesting because if you were to write The Hitchhiker’s Guide to the Galaxy, I suppose, say, our idea that we talked about, that bacteria are the most prominent form of life throughout the galaxy and the universe, I suppose that Earth would be kind of unique and would require-
Nick Lane
(03:35:22)
There’s abundance in that case.
Lex Fridman
(03:35:24)
Yeah.
Nick Lane
(03:35:25)
It’s profligate, it’s rich. It’s enormously, enormously living.
Lex Fridman
(03:35:29)
So how would you describe that it’s not bacteria, it’s…
Nick Lane
(03:35:36)
Eukaryotic.
Lex Fridman
(03:35:39)
Yeah.
Nick Lane
(03:35:39)
Well that’s the technical term, but it is basically.
Lex Fridman
(03:35:46)
Yeah. [inaudible 03:35:47]
Nick Lane
(03:35:47)
How would I describe that? I’ve actually really struggled with that term because the word… There’s few words quite as good as eukaryotic to put everybody off immediately. You start using words like that and maybe they’ll leave the room. Krebs cycle is another one that gets people to leave the room.
Lex Fridman
(03:36:06)
That’s interesting.
Nick Lane
(03:36:07)
So I’m trying to think, is there another word for eukaryotic that I can use? And really the only word that I’ve been able to use is complex, complex cells, complex life and so on. And that word, it serves one immediate purpose, which is to convey an impression, but then it means so many different things to everybody that actually is lost immediately. And so it is kind of…
Lex Fridman
(03:36:36)
Well, that’s noticeable from the perspective of other planets. That is a noticeable phase transition of complexity, the eukaryotic. What about the harmless and the mostly harmless? Is that kind of…
Nick Lane
(03:36:51)
Probably accurate on a universal kind of scale. I don’t think that humanity is in any danger of disturbing the universe at the moment.
Lex Fridman
(03:37:02)
At the moment, which is why the mostly, we don’t know. Depends what Elon is up to. Depends how many rockets. I think-
Nick Lane
(03:37:10)
It’ll be still even then a while, I think, before we disturb the fabric of time and space.
Lex Fridman
(03:37:17)
It was the aforementioned Andrej Karpathy. I think he summarized Earth as a system where you hammer it with a bunch of photons. The input is like photons and the output is rockets. If you just-
Nick Lane
(03:37:37)
Well, that’s a hell of a lot of photons before it was a rocket.
Lex Fridman
(03:37:40)
But maybe in the span of the universe, it’s not that much time. And I do wonder what the future is, whether we’re just in the early beginnings of this Earth, which is important when you try to summarize it or we’re at the end where humans have finally gained the ability to destroy the entirety of this beautiful project we’ve got going on now with nuclear weapons, with engineered viruses, with all those kinds of things.
Nick Lane
(03:38:10)
Or just inadvertently through global warming and pollution and so on. We’re quite capable. We just need to pass the point.
Lex Fridman
(03:38:18)
[inaudible 03:38:18]
Nick Lane
(03:38:18)
I think we’re more likely to do it inadvertently than through a nuclear war, which could happen at any time. But my fear is we just don’t know where the tipping points are and we will kind of think we’re smart enough to fix the problem quickly if we really need to. I think that’s the overriding assumption that, “We’re all right for now. Maybe in 20 years time it’s going to be a calamitous problem, and then we’ll really need to put some serious mental power into fixing it.” Without seriously worrying that perhaps that is too late and that however brilliant we are, we miss the boat.
Lex Fridman
(03:38:59)
And just walk off the cliff. I don’t know. I have optimism in humans being clever descendants.
Nick Lane
(03:39:05)
Oh, I have no doubt that we can fix the problem, but it’s an urgent problem and we need to fix it pretty sharpish.
Lex Fridman
(03:39:14)
And-
Nick Lane
(03:39:14)
I do have doubts about whether politically we are capable of coming together enough to not just in any one country, but around the planet to… I know we can do it, but do we have the will? Do we have the vision to accomplish it?
Lex Fridman
(03:39:31)
That’s what makes this whole ride fun. We don’t know, not only do we not know if we can handle the crises before us, we don’t even know all the crises that are going to be before us in the next 20 years. The ones, I think, that will most likely challenge us in the 21st century are the ones we don’t even expect. People didn’t expect World War II at the end of World War I.
Nick Lane
(03:39:57)
Not at the end of World War I, but by the late 1920s, I think people were beginning to worry about it.
Lex Fridman
(03:40:03)
Yeah, no, there’s always people worrying about everything. So if you focus on the thing that-
Nick Lane
(03:40:08)
People worry about, yes.
Lex Fridman
(03:40:09)
Because there’s a million things people worry about and 99.99999% of them don’t come to be. Of course, the people that turn out to be right, they’ll say, “I knew all along,” but that’s not an accurate way of knowing what you could have predicted. I think rationally speaking, you can worry about it, but nobody thought you could have another world war, the war to end all wars. Why would you have another war? And the idea of nuclear weapons just technologically is a very difficult thing to anticipate, to create a weapon that just jumps orders of magnitude in destructive capability. And of course, we can intuit all the things like engineered viruses, nanobots, artificial intelligence. Yes, all the different complicated global effects of global warming. So how that changes the allocation of resources, the flow of energy, the tension between countries, the military conflict between countries, the reallocation of power.

(03:41:06)
Then looking at the role of China and this whole thing with Russia and the growing influence of Africa and the weird dynamics of Europe, and then America falling apart through the political division, fueled by recommender systems through Twitter and Facebook. The whole beautiful mess is just fun. And I think there’s a lot of incredible engineers, incredible scientists, incredible human beings, that while everyone is bickering and so on online for the fun of it, on the weekends, they’re actually trying to build solutions. And those are the people that will create something beautiful. At least that’s the process of evolution. It all started with a Chuck Norris single cell organism that went out from the vents and was the parent to all of us. And to that guy or lady or both, I guess, a big thank you. And I can’t wait to see what happens next. And I’m glad there’s incredible humans writing and studying it like you are. Nick, it’s a huge honor that you would talk to me.
Nick Lane
(03:42:12)
This has been fantastic.
Lex Fridman
(03:42:13)
This is really amazing. I can’t wait to read what you write next. Thank you for existing and thank you for talking today.
Nick Lane
(03:42:24)
Thank you.
Lex Fridman
(03:42:26)
Thanks for listening to this conversation with Nick Lane. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Steve Jobs. “I think the biggest innovations of the 21st century will be at the intersection of biology and technology. A new era is beginning.” Thank you for listening and hope to see you next time.