Transcript for Benjamin Netanyahu: Israel, Palestine, Power, Corruption, Hate, and Peace | Lex Fridman Podcast #389

This is a transcript of Lex Fridman Podcast #389 with Benjamin Netanyahu.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction
Hate
Judicial reform and protests
AI
Competition
Power and corruption

Introduction

Benjamin Netanyahu
(00:00:00)
We should never, and I never sit aside and say, oh, they’re just threatening to destroy us. They won’t do it. If somebody threatens to eliminate you as Iran is doing today, and as Hitler did then and people discounted it, well, if somebody threatens to annihilate us, take them seriously and act to prevent it early on. Don’t let them have the means to do so because that may be too late.
Lex Fridman
(00:00:26)
The following is a conversation with Benjamin Netanyahu, Prime Minister of Israel, currently serving his sixth term in office. He’s one of the most influential, powerful, and controversial men in the world, leading a right-wing coalition government at the center of one of the most intense and long-lasting conflicts and crises in human history.

(00:00:47)
As we spoke, and as I speak now, large-scale protests are breaking out all over Israel over this government’s proposed judicial reform that seeks to weaken the Supreme Court in a bold accumulation of power. Given the current intense political battles in Israel, our previous intention to speak for three hours was adjusted to one hour for the time being, but we agreed to speak again for much longer in the future. I will also interview people who harshly disagree with words spoken in this conversation. I will speak with other world leaders, with religious leaders, with historians and activists, and with people who have lived through and have suffered the pain of war, destruction and loss that stoke the fires of anger and hate in their hearts.

(00:01:35)
For this, I will travel anywhere, no matter how dangerous, if there’s any chance it may help add to understanding and love in the world. I believe in the power of conversation to do just this, to remind us of our common humanity. I know I’m under-qualified and under-skilled for these conversations, so I will often fall short and I will certainly get attacked, derided and slandered. But I will always turn the other cheek and use these attacks to learn and improve, and no matter what, never give in to cynicism.

(00:02:12)
This life, this world of ours is too beautiful not to keep trying. Trying to do some good in whatever way each of us knows how. I love you all.

(00:02:25)
This is The Lex Fridman Podcast. To support it please check out our sponsors in the description. And now, dear friends, here’s Benjamin Netanyahu.

Hate


(00:02:35)
You’re loved by many people here in Israel and in the world, but you’re also hated by many. In fact, I think you may be one of the most hated men in the world. So if there’s a young man or a young woman listening to this right now who has such hate in their heart, what can you say to them to one day turn that hate into love?
Benjamin Netanyahu
(00:02:58)
I disagree with the premise of your question. I think I’ve enjoyed very broad support around the world. There are certain corners in which we have this animosity that you describe, and it sort of permeates some of the newspapers and the news organs and so on in the United States, but it certainly doesn’t reflect the broad support that I have. I just gave an interview on an Iranian channel, 60 million viewers. I gave another one, just did a little video a few years ago, 25 million viewers from Iran. Certainly no hate there, I have to tell you, not from the regime.

(00:03:45)
And when I go around the world, and I’ve been around the world, people want to hear what we have to say, what I have to say as a leader of Israel, which they respect increasingly as a rising power in the world. So I disagree with that. And the most important thing that goes against what you said is the respect that we receive from the Arab world and the fact that we’ve made four historic peace agreements with Arab countries. And they made it with me, they didn’t make it with anyone else. And I respect them and they respect me, and probably more to come. So I think the premise is wrong, that’s all.
Lex Fridman
(00:04:24)
Well, there’s a lot of love, yes. A lot of leaders are collaborating –
Benjamin Netanyahu
(00:04:32)
Respect, I said not love.
Lex Fridman
(00:04:34)
Okay. All right. Well, it’s a spectrum, but there are people who don’t have good things to say about Israel, who do have hate in their heart for Israel.
Benjamin Netanyahu
(00:04:45)
Yeah.
Lex Fridman
(00:04:46)
And what can you say to those people?
Benjamin Netanyahu
(00:04:49)
Well, I think they don’t know very much. I think they’re guided by a lot of ignorance. They don’t know about Israel. They don’t know that Israel is a stellar democracy, that it happens to be one of the most advanced societies on the planet, that what Israel develops helps humanity in every field: in medicine, in agriculture, in the environment and telecoms, and we’ll talk about AI in a minute. It’s changing the world for the better and spreading this across six continents.

(00:05:21)
We’ve sent rescue teams more than any other country in the world, and we’re one-tenth of 1% of the world’s population. But when there’s an earthquake or a devastation in Haiti or in the Philippines, Israel is there. When there’s a devastating earthquake in Turkey, Israel was there. When there’s something in Nepal, Israel is there, and it’s the second country to arrive, after India in one case, after the United States in another. Tiny Israel is a benefactor to all of humanity.
Lex Fridman
(00:05:57)
So you’re a student of history. If I can just linger on that philosophical notion of hate, that part of human nature. If you look at World War II, what do you learn from human nature, from the rise of the Third Reich and the rise of somebody like Hitler and the hate that permeates that?
Benjamin Netanyahu
(00:06:19)
Well, what I’ve learned is that you have to nip bad things in the bud. There’s a Latin term that says [foreign language 00:06:29], stop bad things when they’re small. And the deliberate hatred, the incitement of hatred against one community, its demonization, the delegitimization that goes with it, is a very dangerous thing.

(00:06:48)
And that happened in the case of the Jews. What started with the Jews soon spread to all of humanity. So what we’ve learned is that we should never, and I never sit aside and say, “Oh, they’re just threatening to destroy us. They won’t do it.” If somebody threatens to eliminate you as Iran is doing today, and as Hitler did then, and people discounted it, well, if somebody threatens to annihilate us, take them seriously and act to prevent it early on. Don’t let them have the means to do so because that may be too late.
Lex Fridman
(00:07:21)
So in those threats underlying that hatred, how much of it is anti-Zionism, and how much of it is anti-Semitism?
Benjamin Netanyahu
(00:07:31)
I don’t distinguish between the two. You can’t say, “Well, I’m okay with Jews, but I just don’t think there should be a Jewish state.” It’s like saying, “I’m not anti-American, I just don’t think there should be an America.” That’s basically what people are saying vis-à-vis anti-Semitism and anti-Zionism.

(00:07:49)
When you’re saying anti-Zionism you’re saying that Jewish people don’t have a right to have a state of their own. And that is a denial of a basic principle that I think completely unmasks what is involved here. Today anti-Semitism is anti-Zionism. Those who oppose the Jewish people oppose the Jewish state.

Judicial reform and protests

Lex Fridman
(00:08:15)
If we jump from human history to the current particular moment, there are protests in Israel now about the proposed judicial reform that gives power to your government to override the Supreme Court. So the critics say that this gives too much power to you, virtually making you a dictator.
Benjamin Netanyahu
(00:08:35)
Yeah. Well, that’s ridiculous. The mere fact that you have so many demonstrations and protests, some dictatorship, huh? There’s a lot of democracy here, more rambunctious and more robust than just about anywhere on the planet.
Lex Fridman
(00:08:52)
Can you steelman the case that this may give too much power to the coalition government, to the prime minister, not just to you, but to those who follow?
Benjamin Netanyahu
(00:09:04)
No, I think that’s complete hogwash, because I think there are very few people who are demonstrating against this. Quite a few, quite many, don’t have an idea what is being discussed. They’re basically being sloganized. You can sloganize, you know something about this, not mass media right now, but the social networks. You can basically, deliberately, with big data and big money, feed slogans and get into people’s minds. I’m sure you don’t think I exaggerate, because you can tell me more about that.

(00:09:38)
And you can create mass mobilization based on these absurd slogans. So here’s where I come from, what we’re doing, what we’re trying to do, and what we’ve changed in what we’re trying to do. I’m a 19th-century democrat, small d, yes, in my views. That is, I ask the question, “What is democracy?” So democracy is the will of the majority and the protection of the rights of, they call it the rights of the minority, but I say the rights of the individual.

(00:10:11)
So how do you balance the two? How do you avoid mobocracy, and how do you avoid dictatorship, the opposite side? The way you avoid it is something that was built essentially by British philosophers and French philosophers, but was encapsulated by the Founding Fathers of the United States. You create a balance between the three branches of government: the legislative, the executive, and the judiciary.

(00:10:41)
And this balance is what assures the balance between majority rights and individual rights. And you have to balance all of them. That balance was maintained in Israel in its first 50 years, and was gradually overtaken and basically broken by the most activist judicial court on the planet. That’s what happened here. And gradually, over the last two, three decades, the court arrogated to itself the powers of the parliament and the executive. So we’re trying to bring it back into line. Bringing it back into line, into what is common in all parliamentary democracies and in the United States, doesn’t mean taking the pendulum from one side and bringing it to the other side.

(00:11:29)
We want checks and balances, not unrivaled power. Just as we said, we want an independent judiciary, but not an all-powerful judiciary. That balance does not mean, bringing it back into line doesn’t mean, that you can have the parliament, our Knesset, override any decision that the Supreme Court makes. So I pretty much early on said, after the judicial reform was introduced, “Get rid of the idea of a sweeping override clause, with which, with 61 votes, that’s a majority of one, you can just nullify any Supreme Court decision. Let’s move it back into the center.” So that’s gone. And most of the criticism of the judicial reform was based on an unlimited override clause, which I’ve said is simply not going to happen. People are discussing something that already for six months does not exist.

(00:12:20)
The second point that we received criticism on was the structure of how you choose Supreme Court judges. Okay, how do you choose them? And the critics of the reform are saying that the idea that elected officials should choose Supreme Court judges is the end of democracy. If that’s the case, the United States is not a democracy. Neither is France, and neither is just about every democracy on the planet. So there is a view here that you can’t have the sordid hands of elected officials involved in the choosing of judges.

(00:12:59)
And in the Israeli system, the judicial activism went so far that effectively the sitting judges have an effective veto on choosing judges, which means that this is a self-selecting court that just perpetuates itself. And we want to correct that. Again, we want to correct it in a balanced way. And that’s basically what we’re trying to do. So I think there’s a lot of misinformation about that. We’re trying to bring Israeli democracy to where it was in its first 50 years. And it was a stellar democracy. It still is. Israel is a democracy, will remain a democracy, a vibrant democracy. And believe me, the fact that people are arguing and demonstrating in the streets and protesting is the best proof of that, and that’s how it’ll remain.
Lex Fridman
(00:13:49)
We spoke about tech companies offline. There are a lot of tech companies nervous about this judicial reform. Can you speak to why large and small companies have a future in Israel?
Benjamin Netanyahu
(00:14:03)
Because Israel is a free market economy. I had something to do with that. I introduced dozens and dozens of free market reforms that made Israel move from $17,000 per capita income to $54,000 within a very short time. That’s nominal GDP per capita according to the IMF. And in that we’ve overtaken Japan, France, Britain, Germany.

(00:14:29)
And how did that happen? Because we unleashed the genius that we have and the initiative and the entrepreneurship that is latent in our population. And to do that, we had to create free markets. So we created that. So Israel has one of the most vibrant free market economies in the world. And the second thing we have is a permanent investment in conceptual products because we have a permanent investment in the military, in our security services, creating basically knowledge workers who then become knowledge entrepreneurs. And so we create this structure, and that’s not going to go away.

(00:15:09)
There’s been a decline in investments in high-tech globally. I think that’s driven by many factors, but the most important one is the interest rate, which I think will fluctuate up and down. But Israel will remain a very attractive country because it produces so many knowledge workers in a knowledge-based economy. And it’s changing so rapidly. The world is changing. You’re looking for the places that have innovation. The future belongs to those who innovate.

(00:15:41)
Israel is the preeminent innovation nation. It has few competitors. And if we would say, “All right, where do you have this close cross-disciplinary fermentation of various skills and areas?” I would say, “It’s in Israel.” And I’ll tell you why. We used to be just telecoms, because people went out of the military intelligence, our NSA, but that’s now broad-based. So you find it in medicine, you find it in biology, you find it in agritech, you find it everywhere. Everything is becoming technologized.

(00:16:17)
And in Israel, everybody is dealing in everything, and that’s a potent reservoir of talent that the world is not going to pass up. And in fact, it’s coming to us. We just had Nvidia coming here, and they decided to build a supercomputer in Israel. Wonder why? We’ve had Intel coming here and deciding now to invest $25 billion, just now, in a new plant in Israel. I wonder why? I don’t wonder why. They know why. Because the talent is here and the freedom is here. And it will remain so.

AI

Lex Fridman
(00:16:52)
You had a conversation about AI with Sam Altman of OpenAI and with Elon Musk.
Benjamin Netanyahu
(00:16:57)
Yeah.
Lex Fridman
(00:16:57)
What was the content of that conversation? What’s your vision for this very highest of tech, which is artificial intelligence?
Benjamin Netanyahu
(00:17:09)
Well, first of all, I have a high regard for the people I talked to. And I understand that they understand things I don’t understand, and I don’t pretend to understand everything. But I do understand one thing. I understand that AI is developing at a geometric rate, and mostly, in political life and in life in general, people don’t have an intuitive grasp of geometric growth. You understand things basically in linear increments. And the idea that you’re coming up a ski slope is very foreign to people. So they don’t understand it, and they’re naturally also sort of taken aback by it. Because what do you do? So I think there are several conclusions from my conversations with them and from my other observations that I’ve been talking about for many years. I’m talking about the need-
Benjamin Netanyahu
(00:18:00)
… observations that I’ve been talking about for many years. I’m talking about the need to do this. Well, the first thing is this. There is no possibility of not entering AI with full force. Secondly, there is a need for regulation. Third, it’s not clear there will be global regulation. Fourth, it’s not clear where it ends up. I certainly cannot say that. Now, you might say, “Does it come to control us?” Okay, that’s a question. Does it come to control us? I don’t know the answer to that. One observation that I had from these conversations is that if it does come to control us, that’s probably the only chance of having universal regulation, because I don’t see anyone deciding to avoid the race and cooperate unless you have that threat. It doesn’t mean you can’t regulate AI within countries even without that understanding, but it does mean that there’s a limit to regulation, because every country will want to make sure that it doesn’t give up competitive advantage if there is no universal regulation.

(00:19:19)
I think that right now, just as 10 years ago, I read a novel. I don’t read novels, but I was forced to read one by a scientific advisor. I read history, I read about economics, I read about technology. I just don’t read novels. In this, I follow Churchill. He said, “Fact is better than fiction.” Well, this fiction would become fact. It was a book, it was a novel about a Chinese/American future cyber war. I read the book in one sitting, called in a team of experts, and I said, “All right, let’s turn Israel into one of the world’s five cyber powers and let’s do it very quickly.” And we did actually. We did exactly that. I think AI is bigger than that and related to that, because it’ll affect … Well, cyber affects everything, but AI will affect it even more fundamentally. And the joining of the two could be very powerful.

(00:20:19)
So I think in Israel, we have to do it anyway for security reasons, and we’re doing it. But I think, what about our databases that are already very robust, on the medical records of 98% of our population? Why don’t we stick a genetic database on that? Why don’t we do other things that could bring what are seemingly magical cures and drugs and medical instruments from that? That’s one possibility. We have it, as I said, in every single field. The conclusion is this. We have to move on AI. We are moving on AI, just as we moved on cyber, and I think Israel will be one of the leading AI powers in the world. The questions I don’t have an answer to are: where does it go? How much does it chew up jobs?

(00:21:19)
There’s an assumption that I’m not sure is true, that the two big previous revolutions in the human condition, namely the agricultural revolution and the industrial revolution, definitely produced more jobs than they consumed. That is not obvious to me at all. I mean, I could see new jobs being created, and yes, I have that comforting statement, but it’s not quite true, because I think on balance, it’ll probably consume more jobs, many more jobs, than it’ll create.
Lex Fridman
(00:21:58)
At least in the short term. And we don’t know about the long term.
Benjamin Netanyahu
(00:22:01)
No, I don’t know about the long term, but I used to have the comfort of being a free market guy. I always said, “We’re going to produce more jobs by, I don’t know, limiting certain government jobs. Actually putting them out in the market will create more jobs,” which obviously happened. We had one telecom company, a government company. When I said, “We’re going to create competition,” they said, “You’re going to run us out. We’re not going to have more workers.” They had 13,000 workers. They went down to 7,000, but we created another 40,000 in the other companies. So, that was a comforting thought. I always knew that was true.

(00:22:36)
Not only that. I also knew that wealth would spread by opening up the markets, completely opposite to the socialist and semi-socialist creed that they had here. They said, “You’re going to make the rich richer and the poor poorer.” No. It made everyone richer, and actually the people who entered the job market because of the reforms we did became a lot richer on the lower rungs of the socioeconomic ladder.

(00:23:05)
But here’s the point, I don’t know. I don’t know that we will not have what Elon Musk calls the end of scarcity. So you’ll have the end of scarcity. You’ll have enormous productivity. Very few people are producing enormous added value. You’re going to have to tax that to pass it to the others. You’re going to have to do that. That’s a political question. I’m not sure how we answer that. What if you tax and somebody else doesn’t tax? You’re going to get everybody to go there. That’s an international issue that we constantly have to deal with.

(00:23:42)
And the second question you have is, suppose you solve that problem and you deliver money to those who are not involved in the AI economy, what do they do? The first question you ask somebody whom you just met after the polite exchanges is, what do you do? Well, people define themselves by their profession. It’s going to be difficult if you don’t have a profession. People will spend more time self-searching, more time in the arts, more time in leisure. I understand that. If I have to bet, it will annihilate many more jobs than it will create and it’ll force a structural change in our economics, in our economic models, and in our politics. And I’m not sure where it’s going to go.
Lex Fridman
(00:24:40)
And that’s something we have to respond to at the national level and as a human civilization, both the threat of AI to us as a human species and then the effect on the jobs. And like you said, cybersecurity.
Benjamin Netanyahu
(00:24:55)
What do you think? You think we’re going to lose control?
Lex Fridman
(00:25:00)
No, first of all, I do believe, maybe naively, that it will create more jobs than it takes.
Benjamin Netanyahu
(00:25:05)
Write that down and we’ll check it.
Lex Fridman
(00:25:07)
It’s on record.
Benjamin Netanyahu
(00:25:09)
We don’t say, “We’ll check it after our lifetime.” No, we’ll see it in a few years.
Lex Fridman
(00:25:12)
We’ll see it in a few years. I’m really concerned about cybersecurity and the nature of how that changes with the power of AI. In terms of existential threats, I think there will be so many threats that aren’t existential along the way that that’s the thing I’m mostly concerned about, versus AI taking complete control and superseding the human species. Although that is something you should consider seriously because of the exponential growth of its capabilities.
Benjamin Netanyahu
(00:25:43)
Yeah, it’s exactly the exponential growth, which we understand is before us, but we don’t really … It’s very hard to project forward.
Lex Fridman
(00:25:51)
To really understand.
Benjamin Netanyahu
(00:25:52)
That’s right. Exactly right. So I deal with what I can and where I can affect something. I tend not to worry about things I don’t control, because at a certain point, there’s no point. I mean, you have to decide what you’re spending your time on. So in practical terms, I think we’ll make Israel a formidable AI power. We understand the limitations of scale, computing power and other things. But I think within those limits, I think we can make here this miracle that we did in many other things. We do more with less. I don’t care if it’s the production of water or the production of energy or the production of knowledge or the production of cyber capabilities, defense and others, we just do more with less. And I think in AI, we’re going to do a lot more with a relatively small but highly gifted population. Very gifted.

Competition

Lex Fridman
(00:26:53)
So taking a small tangent, as we talked about offline, you have a background in TaeKwonDo?
Benjamin Netanyahu
(00:27:00)
Oh, yeah.
Lex Fridman
(00:27:01)
We mentioned Elon Musk. I’ve trained with both. Just as a quick question, who are you betting on in a fight?
Benjamin Netanyahu
(00:27:08)
Well, I refuse to answer that. I will say this.
Lex Fridman
(00:27:13)
Such a politician, you are.
Benjamin Netanyahu
(00:27:14)
Yeah, of course. Here, I’m a politician. I’m openly telling you that I’m dodging the question. But I’ll say this. Actually, I spent five years in our special forces in the military, and we barely spent a minute on martial arts. I actually learned TaeKwonDo later when I came to … It wasn’t even at MIT. At MIT, I think I did karate. But when I came to the UN, I had a martial arts expert who taught me TaeKwonDo, which was interesting. Now, the question you really have to ask is, why did we learn martial arts in this special elite unit? And the answer is, there’s no point. If you saw Indiana Jones, there’s no point. You just pull the trigger. That’s simple. Now, I don’t expect anyone to pull the trigger on this combat, and I’m sure you’ll make sure that doesn’t happen.
Lex Fridman
(00:28:15)
Yeah. I mean, martial arts is bigger than just combat. It’s this journey of humility.
Benjamin Netanyahu
(00:28:21)
Oh, sure.
Lex Fridman
(00:28:23)
It’s an art form. It truly is an art. But it’s fascinating that these two figures in tech are facing each other. I won’t ask the question of who you would face and how you would do, but …
Benjamin Netanyahu
(00:28:34)
Well, I’m facing opponents all the time.
Lex Fridman
(00:28:36)
All the time?
Benjamin Netanyahu
(00:28:37)
Yeah, that’s part of life.
Lex Fridman
(00:28:41)
Not yet.
Benjamin Netanyahu
(00:28:41)
I’m not sure about that.
Lex Fridman
(00:28:42)
Are you announcing any fights?
Benjamin Netanyahu
(00:28:44)
No, no. Part of life is competition. The only time competition ends is death. But political life, economic life, cultural life is engaged continuously in creativity and competition. The problem I have with that is, as I mentioned earlier just before we began the podcast, that at a certain point, you want to put barriers to monopoly. And if you’re a really able competitor, you’re going to create a monopoly. That’s what Peter Thiel says is the natural course of things. It’s what I learned basically in the Boston Consulting Group. If you are a very able competitor, you’ll create scale advantages that give you the ability to lock out your competition. And as a prime minister, I want to ensure that there is competition in the markets, so you have to limit this competitive power at a certain point, and that becomes increasingly hard in a world where everything is intermixed.

(00:29:49)
Where do you define market segments? Where do you define monopoly? How do you do that? That, actually conceptually, I find very challenging, because of all the dozens of economic reforms that I’ve made, the most difficult part is the conceptual part. Once you’ve ironed it out and you say, “Here’s what I want to do. Here’s the right thing to do,” then you have a practical problem of overcoming union resistance, political resistance, press calumny, opponents from this or that corner. That’s a practical matter. But if you have it conceptually defined, you can move ahead to reform economies or reform education or reform transportation. Fine.

(00:30:38)
On the question of the growing power of large companies, big tech companies, to monopolize the markets because they’re better at it: they provide a service, they provide it at a lower cost, at rapidly declining cost. Where do you stop? Where do you stop monopoly power is a crucial question, because it also becomes now a political question. If you amass an enormous amount of economic power, which is information power, that also monopolizes the political process. These are real questions that are not obvious. I don’t have an obvious answer, because as I said, as a 19th-century democrat, these are questions of the 21st century, which people should begin to think about. Do you have a solution to that?
Lex Fridman
(00:31:27)
The solution of monopolies growing arbitrarily-
Benjamin Netanyahu
(00:31:30)
Yeah.
Lex Fridman
(00:31:31)
… unstoppably in power?
Benjamin Netanyahu
(00:31:33)
In economic power, and therefore in political power.
Lex Fridman
(00:31:36)
I mean, some of that is regulation, some of that is competition.
Benjamin Netanyahu
(00:31:40)
Do you know where to draw the line? It’s not breaking up AT&T. It’s not that simple.
Lex Fridman
(00:31:49)
Well, I believe in the power of competition, that there will always be somebody that challenges the big guys, especially in the space of AI. The more open source movements are taking hold, the more the little guy can become the big guy.
Benjamin Netanyahu
(00:32:02)
So you’re saying basically the regulatory instrument is the market?
Lex Fridman
(00:32:09)
In large part, in most part, that’s the hope. Maybe I’m a dreamer.
Benjamin Netanyahu
(00:32:13)
That’s been in many ways my policy up to now, that the best regulator is the market. The best regulator in economic activity is the market, and the best regulator in political matters is the political market. That’s called elections. That’s what regulates. If you have a lousy government and people make lousy decisions, well, you don’t need wise men raised above the masses to decide what is good and what is bad. Let the masses decide. Let them vote every four years or whatever, and they throw you out.

(00:32:54)
By the way, it happened to me. There’s life after political death. There’s actually political life. I was reelected five or six times, and this is my sixth term. So I believe in that. I’m not sure that in economic matters, in the geometric growth of tech companies, that you’ll always have the little guy, the nimble mammal, that will come out and slay the dinosaurs or overcome the dinosaurs, which is essentially what you said.
Lex Fridman
(00:33:25)
Yeah, I wouldn’t count out the little guy.
Benjamin Netanyahu
(00:33:27)
You wouldn’t count out the little guy?
Lex Fridman
(00:33:28)
No.
Benjamin Netanyahu
(00:33:29)
Well, I hope you’re right.

Power and corruption

Lex Fridman
(00:33:31)
Well, let me ask you about this market of politics. So you have served six terms as prime minister over 15 years in power. Let me ask you again, human nature. Do you worry about the corrupting nature of power on you as a leader, on you as a man?
Benjamin Netanyahu
(00:33:48)
Not at all. Because I think that, again, the thing that drives me is nothing but the mission that I took to assure the survival and thriving of the Jewish state. That is, its economic prosperity, but its security and its ability to achieve peace with our neighbors. And I’m committed to it. I think there are many things that have been done. There are a few big things that I can still do, but it doesn’t only depend on my sense of mission. It depends on the market, as we say. It depends really on the will of the Israeli voters. And the Israeli voters have decided to vote for me again and again, even though I wield no power in the press, no power in many quarters here and so on, nothing. I mean, probably, I’m going to be very soon the longest serving prime minister in the last half century in the Western democracies. But that’s not because I amassed great political power in any of the institutions.

(00:34:56)
I remember I had a conversation with Silvio Berlusconi, who recently died. About, I don’t know, 15 years ago, something like that, he said to me, “So Bibi, how many of Israel’s television stations do you have?” And I said, “None.” He said, “You have none?”
Lex Fridman
(00:35:23)
Do you have?
Benjamin Netanyahu
(00:35:24)
“Do you have?” I said, “None.” He said, “I have two. No, no. What, you mean you don’t have any that you control?” I said, “Not only do I have none that I control, they’re all against me.” So he says, “So how do you win elections with both hands tied behind your back?” And I said, “The hard way.” That’s why I have the largest party, though I would have many more seats if I had a sympathetic voice in the media. And Israel, until recently, was dominated completely by one side of the political spectrum that often vilified me, not me, because they viewed-
Benjamin Netanyahu
(00:36:01)
me as representing basically the conservative voices in Israel, which are the majority. So the idea that I’m an omnipotent, authoritarian dictator is ridiculous. I would say I’m not merely a champion of democracy and democratization. I believe ultimately the decision is with the voters, and the voters, even though there are constant press attacks, have chosen to put me back in. So I don’t believe this worry about the corrupting amassing of power applies here. It applies if you don’t have elections, or if you control the means of influencing the voters. I understand what you’re saying, but in my case, it’s the exact opposite. I have to constantly go into elections, constantly with the disadvantage that the major media outlets are sometimes very virulently against me, but it’s fine. And I keep on winning. So I don’t know what you’re talking about. I would say the concentration of power lies elsewhere, not here.
Lex Fridman
(00:37:15)
Well, you have been involved in several corruption cases. How much corruption is there in Israel and how do you fight it in your own party and in Israel?
Benjamin Netanyahu
(00:37:24)
Well, you should ask a different question: what’s happened to these cases? These cases are basically collapsing before our eyes. There was recently an event in which the three judges in my case called in the prosecution and said, “Your flagship, the so-called bribery charge, is gone, doesn’t exist,” before a single defense witness was called. And it sort of tells you that this thing is evaporating. It’s quite astounding, and I have to say it was covered even by the mainstream press in Israel, because it’s such an earthquake. So a lot of these charges will prove to be nothing. I always said, “Listen, I stand before the legal process.” I don’t claim that I’m exempt from it in any way. On the contrary, I think the truth will come out, and it’s coming out. And we see that not only there, but with other things as well.

(00:38:28)
So I think it’s kind of instructive that no politician has been more vilified. None has been put to such a, what is it? About a quarter of a billion shekels were used to scrutinize me, scour my bank accounts, send people to the Philippines, to Mexico, to Europe, to America, everybody using spyware, the most advanced spyware on the planet, against my associates, blackmailing witnesses, telling them, “Think about your family, think about your wife. You better tell us what we want to hear.” All that is coming out of the trial. So I would say that most people now, including my opponents, are no longer asking, as the stuff sort of trickles in and comes out, “What did Netanyahu do?”, because apparently he did nothing. “What was done to him?” is what people ask.

(00:39:31)
“What was done to him? What was done to our democracy? What was done in the attempt to put down somebody who keeps winning elections, despite the handicaps that I described? Maybe we can nail him by framing him.” And the one thing I can say about this court trial is that things are coming out, and that’s very good; objective things are coming out and changing the picture. So I would say the attempt to brand me as corrupt is falling on its face. But what is being uncovered in the trial, such as the use of spyware on a politician and a politician’s surroundings, trying to shake people down in investigations, putting them in flea-ridden cells for 21 days, inviting their 84-year-old mother to investigations without cause, bringing in their mistresses in the corridor, shaking them down, that’s what people are asking about. That corruption is what they want corrected.

Peace

Lex Fridman
(00:40:46)
What is the top obstacle to peaceful coexistence of Israelis and Palestinians? Let’s talk about the big question of peace in this part of the world.
Benjamin Netanyahu
(00:40:55)
Well, I think the reason you have the persistence of the Palestinian-Israeli conflict, which goes back about a century, is the persistent Palestinian refusal to recognize a Jewish state, a nation state for the Jewish people, in any boundary. That’s why they opposed the establishment of the state of Israel before we had a state, and that’s why they’ve opposed it after we had a state. They opposed it when we didn’t have Judea and Samaria, the West Bank, and Gaza in our hands, and they oppose it after we have it. It doesn’t make a difference. It’s basically their persistent refusal to recognize a Jewish state in any boundaries. And I think that their tragedy is that they’ve been commandeered for a century by leadership that refused to compromise with the idea of Zionism, namely that the Jews deserve a state in this part of the world.

(00:41:49)
The territorial dispute is something else. You have a territorial dispute if you say, “Okay, you are living on this side, we’re living on that side. Let’s decide where the border is and so on.” That’s not what the argument is. Palestinian society, which is itself fragmented, agrees across all the factions that there shouldn’t be a Jewish state anywhere. They just disagree between Hamas, which says, “We should get rid of it with terror,” and the others, who say, “We should also use political means to dissolve it.” So that is the problem.
Lex Fridman
(00:42:28)
So even as part of a two-state solution, they’re still against the idea.
Benjamin Netanyahu
(00:42:33)
Well, they don’t want a state next to Israel. They want a state instead of Israel. And they say, “If we get a state, we’ll use it as a springboard to destroy the smaller Israeli state.” Which is what happened when Israel unilaterally walked out of Gaza and effectively established a Hamas state there. They didn’t say, “Oh good, now we have our own territory, our own state. Israel is no longer there. Let’s build peace. Let’s build economic projects. Let’s enfranchise our people.” No, they turned it basically into a terror bastion from which they fired 10,000 rockets into Israel. When Israel left Lebanon, because we had terrorist attacks from there, we had Lebanon taken over by Hezbollah, a terrorist organization that seeks to destroy Israel. And therefore, every time we just walked out, what we got was not peace. We didn’t trade territory for peace; we traded territory for terror. That’s what we had.

(00:43:35)
And that’s what would happen as long as the reigning ideology says, “We don’t want Israel in any border.” So the idea of two states assumes that you’d have on the other side a state that wants to live in peace, and not one that will be overtaken by Iran and its proxies in two seconds and become a base to destroy Israel. And therefore, I think that most Israelis today, if you ask them, would say that concept is not going to work. So what do you do with the Palestinians? They’re still there. And unlike them, I don’t want to throw them out. They’re going to be living here and we’re going to be living here, in an area which, by the way, just to understand the area, the entire area of the so-called West Bank and Israel is the width of the Washington Beltway, more or less.

(00:44:26)
Just a little more, not much more. You can’t really divide it up. You can’t say, “Well, you’re going to fly in. Who controls the airspace?” Well, it takes you about two and a half minutes to cross it with a regular 747. With a fighter plane it takes you a minute and a half, okay? So how are you going to divide the airspace? Well, you’re not going to divide it. Israel’s going to control that airspace and the electromagnetic space and so on. So security has to be in the hands of Israel. My view of how you solve this problem is a simple principle: the Palestinians should have all the powers to govern themselves and none of the powers to threaten Israel, which basically means that the responsibility for overall security remains with Israel. And from a practical point of view, we’ve seen that every time Israel leaves a territory and takes its security forces out of an area, it is immediately overtaken by Hamas or Hezbollah or jihadists who are basically committed to the destruction of Israel and also bring misery to their Palestinian or Arab subjects.

(00:45:40)
So I think that principle is less than perfect sovereignty, because you’re taking a certain amount of sovereign powers, especially security, away. But I think it’s the only practical solution. So people say, “Ah, but it’s not a perfect state.” I say, “Okay, call it what you will. Call it, I don’t know, limited sovereignty. Call it autonomy plus. Call it whatever you want to call it.” But that’s the reality. And right now, if you ask Israelis across the political spectrum, except the very hard left, most Israelis agree with that. They don’t really debate it.
Lex Fridman
(00:46:14)
So a two-state solution where Israel controls the security of the entire region.
Benjamin Netanyahu
(00:46:18)
We don’t call it quite that. I mean, there are different names, but the idea is yes, Israel controls security in the entire area. It’s this tiny area between the Jordan River and the sea. I mean, you can’t walk it in one afternoon, but if you’re really fit, you can do it in a day, less than a day. I did.
Lex Fridman
(00:46:39)
So the expansion of settlements in the West Bank has been a top priority for this new government. Some people harshly criticize this as contributing to escalating Israel-Palestine tensions. Can you understand that perspective, that this expansion of settlements is not good for a two-state solution?
Benjamin Netanyahu
(00:46:59)
Yeah, I can understand what they’re saying, but they don’t understand why they’re wrong. First, most Israelis who live in Judea and Samaria live in urban blocks, and that accounts for about 90% of the population. And everybody recognizes that those urban blocks are going to be part of Israel in any future arrangement. So they’re really arguing about something that has already been decided and agreed upon, really, by Americans, even by many Arabs; they don’t think that Israel is going to dismantle these blocks. You look outside the window here, and within about a kilometer or a mile from here you have Jerusalem; half of Jerusalem grew naturally beyond the old 1967 border. So you’re not going to dismantle half of Jerusalem. That’s not going to happen. And most people don’t expect that. Then you have the other 10% scattered in tiny, small communities, and people say, “Well, you’re going to have to take them out.” Why?

(00:48:05)
Remember that in pre-1967 Israel, we have over a million and a half Arabs here. We don’t say, “Oh, Israel has to be ethnically cleansed of its Arab citizens in order to have peace.” Of course not. Jews can live among Arabs, and Arabs can live among Jews. Yet what is being advanced by those people is the idea that we can’t live in our ancestral homeland, in these disputed areas. Nobody can say that these are Palestinian areas, and nobody can say that these are Israeli areas. We claim them, they claim them. We’ve only been attached to this land for, oh, 3,500 years. But it’s a dispute, I agree. But I don’t agree that we should throw out the Arabs, and I don’t think that they should throw out the Jews. And if somebody said to you, “The only way we’re going to have peace with Israel is to have an ethnically cleansed Palestinian entity,” that’s outrageous.

(00:49:00)
If you said you shouldn’t have Jews living in, I don’t know, suburbs of London or New York and so on, I don’t think that would play too well. The world is actually advancing a solution that says that Jews cannot live among Arabs, and Arabs cannot live among Jews. I don’t think that’s the right way to do it. And I think there’s a solution out there, though I don’t think we’re going to get to it quickly, which is less than perfect sovereignty: Israeli security maintained for the entire territory by Israel, and not uprooting anybody, not kicking out Arabs or Palestinians. They’re going to live in enclaves in sovereign Israel, and we’re probably going to live in enclaves there, connected through transportation continuity as opposed to territorial continuity. For example, you can have tunnels and overpasses and so on that connect the various communities.

(00:49:57)
We’re doing that right now, and it actually works. I think there is a solution to this. It’s not the perfect world that people think of, because that model, I think, doesn’t apply here. Whether it applies elsewhere is a question; I don’t think so. But I think there’s one other thing, and that’s the main thing that I’ve been involved in. People said, “If you don’t solve the Palestinian problem, you’re not going to get to the Arab world. You’re not going to have peace with the Arab world.” Remember, the Palestinians are about 2% of the Arab world, and the claim was that you’re not going to make peace with the other 98%. And that’s our goal.

(00:50:39)
And for a long time, people accepted that. After the initial peace treaties, with Egypt between Prime Minister Begin of the Likud and President Sadat of Egypt, and then with Jordan between Prime Minister Rabin and King Hussein, for a quarter of a century we didn’t have any more peace treaties, because people said, “You’ve got to go through the Palestinians.” And the Palestinians don’t want a solution of the kind that I described, or any kind except the one that involves the dissolution of the state of Israel.

(00:51:08)
So we could wait another half century. And I said, “No, I don’t think we should accept the premise that we have to wait for the Palestinians, because we’ll have to wait forever.” So I decided to do it differently. I decided to go directly to the Arab capitals and make the historic Abraham Accords, essentially reversing the equation: not a peace process that goes inside out, but outside in. And we went directly to these countries and forged these breakthrough peace accords with the United Arab Emirates, with Bahrain, with Morocco and with Sudan. And we’re now trying to expand that in a quantum leap with Saudi Arabia.
Lex Fridman
(00:51:56)
What does it take to do that with Saudi Arabia, with the Saudi Crown Prince Mohammed bin Salman?
Benjamin Netanyahu
(00:52:01)
I’m a student of history, and I read a lot of history, and I read that in the Versailles discussions after World War I, President Woodrow Wilson said, “I believe in open covenants, openly arrived at.” I have my correction: I believe in open covenants, secretly arrived at. So we’re not going to advance a Saudi-Israeli peace by having it publicly discussed. And in any case, it’s a decision of the Saudis if they want to do it, but there’s obviously a mutual interest. So here’s my view: if we try to wait for the 2% in order to get to the 98%, we’re going to fail, and we have failed. If we go to the 98%, we have a much greater chance of persuading the 2%. You know why? Because the hope of the 2%, the Palestinian hope to vanquish the state of Israel rather than make peace with it, is based, among other things, on the assumption that eventually the 98%, the rest of the Arab world, will kick in and help them dissolve or destroy the Jewish state.

(00:53:08)
When that hope is taken away, then you begin to have a turn toward realistic solutions of coexistence. By the way, they’ll require compromise on the Israeli side too, and I’m perfectly cognizant of that and willing to do that. But I think a realistic compromise will be struck much more readily when the conflict between Israel and the Arab states, the Arab world, is effectively solved. And I think we’re on that path. It was a conceptual change, just like a few others I’ve been involved in; as I told you, the conceptual battle is always the most difficult one. I had to fight this battle to convert a semi-socialist state into a free market capitalist state, and I have to say that most people today recognize the power of competition and the benefits of free markets. So we also had to fight this battle-
Benjamin Netanyahu
(00:54:00)
that said you have to go through the Palestinian strait, S-T-R-A-I-T, to get to the other places. There’s no way to avoid it; you have to go through this impassable pass. And I think that now people are recognizing that we’ll go around it and probably circle back. And that, I think, actually gives hope not only for an Arab-Israeli peace, but, circling back, for an Israeli-Palestinian peace. And obviously this is not something that you find in the soundbites or in the popular discussion of the press. But that idea is permeating, and I think it’s the right idea, because I think it’s the only one that will work.
Lex Fridman
(00:54:50)
So expanding the circle of peace, just to linger on that, requires what? Secretly talking, man-to-man, human-to-human, to leaders of other nations and-
Benjamin Netanyahu
(00:55:03)
Theoretically, you’re right.

War in Ukraine

Lex Fridman
(00:55:04)
Theoretically. Okay. Well, let me ask you another theoretical question on this circle of peace. As a student of history, looking at the ideas of war and peace, what do you think can achieve peace in the war in Ukraine looking at another part of the world? If you consider the fight for peace in this part of the world, how can you apply that to that other part of the world between Russia and Ukraine now?
Benjamin Netanyahu
(00:55:38)
I think it’s one of the savage horrors of history, one of the great tragedies that is occurring. Let me say in advance that if I have any opportunity to use my contacts to help bring about an end to this tragedy, I’ll do so. I know both leaders, but I don’t just jump in. If there’s a desire at a certain point, because the conditions have created the possibility of helping stop this carnage, then I’ll do it. And that’s why I choose my words carefully, because I think that may be the best thing that I could do. Look, I think what you see in Ukraine is what happens when you have territorial designs on a territory by a country that has nuclear weapons. And there, to me, you see the change in the equation. Now, I think that people are loath to use nuclear weapons, and I don’t think the Russian side would use them with happy abandon.

(00:56:59)
I don’t think that’s the question, but you see how the whole configuration changes when that happens. So you have to be very careful about how you resolve this conflict, so it doesn’t, well, go off the rails, so to speak. That, by the way, is the corollary here. We don’t want Iran, which is an aggressive force with an aggressive ideology of dominating first the Muslim world, then eliminating Israel, and then becoming a global force, to have nuclear weapons. It’s totally different when they don’t have them than when they do. And that’s why one of my main goals has been to prevent Iran from having the means of mass destruction, atomic bombs, which they openly say will be used against us. And you can understand that. How to bring about an end to Ukraine? I have my ideas. I don’t think it’s worthwhile discussing them now, because they might be required later on.
Lex Fridman
(00:58:06)
Do you believe in the power of conversation? Since you have contacts with Volodymyr Zelenskyy and Vladimir Putin, just leaders sitting in a room and discussing how the end of war can be brought about?
Benjamin Netanyahu
(00:58:19)
I think it’s a combination of that, but I think it’s the question of interest and whether you have to get both sides to a point where they think that that conversation would lead to something useful. I don’t think they’re there right now.
Lex Fridman
(00:58:37)
What part of this is just basic human ego, stubbornness, all of this, between leaders? Which is why I bring up the power of conversation, of sitting in a room, realizing we’re human beings, and that there’s a history that connects Ukraine and Russia.
Benjamin Netanyahu
(00:58:52)
I don’t think they’re in a position to enter a room right now, realistically. I mean, you can posit that it would be good if that could happen, but entering the room is sometimes more complicated than what happens in the room. And there’s a lot of pre-negotiations on the negotiation, then you negotiate endlessly on the negotiation. They’re not even there.
Lex Fridman
(00:59:11)
It took a lot of work for you to get to a handshake in the past.

Abraham Accords

Benjamin Netanyahu
(00:59:15)
It’s an interesting question. How did the peace, the Abraham Accords, how did that begin? We had decades, we had 65 or 70 years, where these leaders would not meet openly or even secretly with an Israeli leader. Yes, we had the Mossad making contacts with them all the time, and so on, but how do we break the ice at the top level of leadership? Well, we broke the ice because I took a very strong stance against Iran, and the Gulf states understood that Iran is a formidable danger to them, so we had a common interest. And the second thing is that, because of the economic reforms that we had produced in Israel, Israel became a technological powerhouse. And that could help their nations, in terms of simply bettering the lives of their peoples.

(01:00:12)
And the combination of the desire to have some kind of protection against Iran, or some kind of cooperation against Iran, and civilian economic cooperation came to a head when I gave a speech in the American Congress, which I didn’t do lightly; I had to decide to challenge a sitting American president over the so-called Iran deal, which I thought would pave Iran’s path with gold to become an effective nuclear power. That’s what would happen. So I went there. And in the course of giving that speech before the joint session of Congress, our delegation received calls from Gulf states who said, “We can’t believe what your prime minister is doing. He’s challenging the President of the United States.” Well, I had no choice, because I thought my country’s own existence was imperiled. And remember, we always understood, through changing administrations, that America, no matter what leadership, is always the irreplaceable and indispensable ally of Israel and will always remain so. We can have arguments, as we have, but they’re in the family; as we say in [foreign language 01:01:32], it’s the family.

(01:01:35)
But nevertheless, I was forced to take a stand. That produced calls from Gulf states that ultimately led to clandestine meetings, which ultimately flowered into the Abraham Accords. And I think we’re at a point where the idea of ending the Arab-Israeli conflict, not the Palestinian-Israeli conflict, the Arab-Israeli conflict, can happen. I’m not sure it will. It depends on quite a few things, but it could happen. And if it happens, it might open up the ending of the Israeli-Islamic conflict. Remember, the Arab world is a small part, an important part, but small. There are large Islamic populations, and it could bring about an end to a historic enmity between Islam and Judaism. It could be a great thing.

(01:02:31)
So I’m looking at this larger thing. You can be hobbled by saying, “Oh, well, you’ve had this hiccup in Gaza, or this or that thing happening with the Palestinians.” It’s important for us, because we want security. But I think the larger question is: can we break out into a much wider peace and ultimately come back and make the peace between Israel and the Palestinians, rather than waiting to solve that and never getting to paint on the larger canvas? I want to paint on the larger canvas and come back to the Palestinian-Israeli conflict.

History

Lex Fridman
(01:03:16)
As you write about in your book, what have you learned about life from your father?
Benjamin Netanyahu
(01:03:21)
My father was a great historian, and he taught me several things. He said that the first condition for a living organism is to identify danger in time, because if you don’t, you could be devoured; you could be destroyed very quickly. And that’s the nature of human conflict. In fact, the Jewish people lost the capacity to identify danger in time, and we were almost devoured and destroyed by the Nazi threat. So when I see somebody parroting the Nazi goal of destroying the Jewish state, I try to mobilize the country and the world in time, because I think Iran is a global threat, not only a threat to Israel. That’s the first thing.

(01:04:17)
The second thing: I once asked him, before I got elected, “Well, what do you think is the most important quality for a prime minister of Israel?” And he came back with a question, “What do you think?” And I said, “Well, you have to have vision, and you have to have the flexibility of navigating and working towards that vision. Be flexible, but understand where you’re heading.” And he said, “Well, you need that for anything. You need it if you’re a university president or the leader of a corporation or anything; anybody would have to have that.” I said, “All right, so what do you need to be the leader of Israel?” He came back to me with a word that stunned me. He said, “Education. You need a broad and deep education, or you’ll be at the mercy of your clerks or the press or whatever. You have to be able to do that.” Now, as I spend time in government, being reelected by the people of Israel, I recognize more and more how right he was.

(01:05:37)
You need to constantly ask yourself, “Where is the direction we want to take the country? How do we achieve that goal?” But also understand that new disciplines are being added. You have to learn all the time. You have to add to your intellectual capital all the time. Kissinger wrote that once you enter public life, you begin to draw on your intellectual capital, and it will be depleted very quickly if you stay a long time. I disagree with that. I think you have to constantly increase your understanding of things as they change, because my father was right. You need to broaden and deepen your education as you go along. You can’t just sit back and say, “Well, I studied some things in university, or in college, or in Boston, or at MIT, and that’s enough. I’ve done it.” No, learn, learn, learn, learn. Never stop.
Lex Fridman
(01:06:34)
And if I may suggest, as part of the education, I would add in a little literature, maybe Dostoevsky, in the plentiful time you have as a prime minister to read.
Benjamin Netanyahu
(01:06:44)
Well, I read him, but I’ll tell you what I think is bigger than Dostoevsky.
Lex Fridman
(01:06:47)
Oh, no. Who’s that?
Benjamin Netanyahu
(01:06:49)
Not who’s that, but what’s that. Dan Rather came to see me with his grandson a few years ago. And the grandson asked me, he was a student at an Ivy League college, 18 years old, and he wants to enter politics. And he said, “What’s the most important thing that I have to study to enter a political life?” And I said, “You have three things you have to study, okay? History, history and history.” That’s the fundamental discipline for political life. But then you have to study other things: study economics, study politics and so on, and study the military if you have… I had an advantage because I spent some years there, so I learned a lot of that, but I had to acquire the other disciplines. And you never acquire enough. So read, read, read. And by the way, if I have to choose, I read history, history and history. Good works of history, not lousy books.

Survival

Lex Fridman
(01:08:02)
Last question. You’ve talked about a survival of a nation. You, yourself, are a mortal being. Do you contemplate your mortality? Do you contemplate your death? Are you afraid of death?
Benjamin Netanyahu
(01:08:15)
Aren’t you?
Lex Fridman
(01:08:16)
Yes.
Benjamin Netanyahu
(01:08:16)
Who is not? I mean, if you’re conscious, if you’re a being with consciousness, one of the unhappy things about the human brain is that it can contemplate its own demise. And so, we all make our compromises with this, but I think the question is: what lives on? What lives on beyond us? And I think that you have to define how much of posterity you want to influence. I cannot influence the course of humanity. We are all specks, little specks. So that’s not the issue. But in my case, I’ve devoted my life to a very defined purpose, and that is to assure the future and security, and I would say permanence, though that is obviously a limited thing, of the Jewish state and the Jewish people. I don’t think one can exist without the other. So I’ve devoted my life to that. And I hope that in my time on this Earth and in my years in office, I will have contributed to that.
Lex Fridman
(01:09:29)
Well, you had one heck of a life, starting from MIT to six terms as prime minister. Thank you for this stroll through human history and for this conversation. It was an honor.
Benjamin Netanyahu
(01:09:44)
Thank you. And I hope you come back to Israel many times. Remember, it’s the innovation nation. It’s a robust democracy. Don’t believe all the stuff that you are being told. It’ll remain that way. It cannot be any other way. I’ll tell you the other thing: it’s the best ally of the United States, and its importance is growing by the day, because our capacities in the information world are growing by the day. We need a coalition of the like-minded smarts. This is a smart nation, and we share the basic values of freedom and liberty with the United States. So the coalition of the smarts means Israel is the sixth eye, and America has no better ally.
Lex Fridman
(01:10:33)
All right. Now off mic, I’m going to force you to finally tell me who is going to win, Elon Musk or Mark Zuckerberg? But it’s a good thing that we ran out of time here.
Benjamin Netanyahu
(01:10:41)
I’ll tell you outside.
Lex Fridman
(01:10:44)
Thanks for listening to this conversation with Benjamin Netanyahu. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Mahatma Gandhi, “An eye for an eye will only make the whole world blind.” Thank you for listening and I hope to see you next time.

Transcript for Robert F. Kennedy Jr: CIA, Power, Corruption, War, Freedom, and Meaning | Lex Fridman Podcast #388

This is a transcript of Lex Fridman Podcast #388 with Robert F. Kennedy Jr.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Robert F. Kennedy Jr
(00:00:00)
It’s not our business to change the Russian government. And anybody who thinks it’s a good idea to do regime change in Russia, which has more nuclear weapons than we do, is, I think, irresponsible. And Vladimir Putin himself has said, “We will not live in a world without Russia,” and it was clear when he said that, that he was talking about himself. And he has his hand on a button that could bring Armageddon to the entire planet. So why are we messing with this? It’s not our job to change that regime, and we should be making friends with the Russians. We shouldn’t be treating him as an enemy. Now we’ve pushed him into the camp with China. That’s not a good thing for our country. And by the way, what we’re doing now does not appear to be weakening Putin at all.
Lex Fridman
(00:00:56)
The following is a conversation with Robert F. Kennedy Jr, candidate for the President of the United States, running as a Democrat. Robert is an activist, lawyer and author who has challenged some of the world’s most powerful corporations seeking to hold them accountable for the harm they may cause. I love science and engineering. These two pursuits are, to me the most beautiful and powerful in the history of human civilization. Science is our journey, our fight for uncovering the laws of nature and leveraging them to understand the universe and to lessen the amount of suffering in the world. Some of the greatest human beings I’ve ever met, including most of my good friends, are scientists and engineers. Again, I love science, but science cannot flourish without epistemic humility, without debate, both in the pages of academic journals and in the public square, in good faith, long form conversations.

(00:01:56)
Agree or disagree, I believe Robert’s voice should be part of the debate. To call him a conspiracy theorist and arrogantly dismiss everything he says without addressing it diminishes the public’s trust in the scientific process. At the same time, dogmatic skepticism of all scientific output on controversial topics like the pandemic is equally, if not more dishonest and destructive. I recommend that people read and listen to Robert F. Kennedy Jr, his arguments and his ideas. But I also recommend, as I say in this conversation, that people read and listen to Vincent Racaniello from This Week in Virology, Dan Wilson from Debunk The Funk, and the Twitter and books of Paul Offit, Eric Topol, and others who are outspoken in their disagreement with Robert.

(00:02:50)
It is disagreement, not conformity that bends the long arc of humanity toward truth and wisdom. In this process of disagreement, everybody has a lesson to teach you, but we must have the humility to hear it and to learn from it. This is The Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Robert F. Kennedy Jr.

US history


(00:03:18)
It’s the 4th of July, Independence Day. So simple question, simple, big question. What do you love about this country, the United States of America?
Robert F. Kennedy Jr
(00:03:27)
I would say there’s so many things that I love about the country, the landscapes and the waterways and the people, et cetera. But on a higher level, people argue about whether we’re an exemplary nation, and that term has been given a bad name, particularly by the neocons, by the actions of the neocons in recent decades, who have turned that phrase into a justification for forcing people to adopt American systems or values at the barrel of a gun. But my father and uncle used it in a very different way, and they were very proud of it. I grew up very proud of this country because we were the exemplary nation in the sense that we were an example of democracy all over the world. When we first launched our democracy in 1780, we were the only democracy on earth. By the Civil War, by 1865, there were six democracies.

(00:04:35)
Today there’s probably 190, and all of them in one way or another are modeled on the American experience. And it’s extraordinary because our first serious and sustained contact with the European culture and continent was in 1608 when John Winthrop came over with his Puritans in the sloop Arbella and Winthrop gave this famous speech where he said, “This is going to be a city on a hill. This is going to be an example for all the other nations in the world.” And he warned his fellow Puritans. They were sitting at this great expanse of land and he said, “We can’t be seduced by the lure of real estate or by the carnal opportunities of this land. We have to take this country as a gift from God and then turn it into an example for the rest of the world of God’s love, of God’s will and wisdom.” And 200 years later, 250 years later, a different generation, they were mainly [inaudible 00:05:59], people who had a belief in God, but not so much a love of particular religious cosmologies.

(00:06:13)
The Framers of the Constitution believed that we were creating something that would be replicated around the world, and that it was an example in democracy. There would be this kind of wisdom from the collective that… And the word wisdom means a knowledge of God’s will, and that somehow God would speak through the collective in a way that he or she could not speak through totalitarian regimes. And I think that’s something that, even though Winthrop was a white man and a Protestant, every immigrant group who came after them adopted. And I know my family, when my family came over, all of my grandparents came over in 1848 during the potato famine, and they saw this country as unique in history, as something that was part of a broader spiritual mission. And so I’d say that from a 30,000-foot level, I grew up so proud of this country and believing that it was the greatest country in the world, and for those reasons.

Freedom

Lex Fridman
(00:07:34)
Well, I immigrated to this country. And one of the things that really embodies America to me is the ideal of freedom. Hunter S. Thompson said, “Freedom is something that dies unless it’s used.” What does freedom mean to you?
Robert F. Kennedy Jr
(00:07:47)
To me, freedom does not mean chaos, and it does not mean anarchy. It means that it has to be accompanied by restraint, by self-restraint, if it’s going to live up to its promise. What it means is the capacity for human beings to exercise and to fulfill their creative energies unrestrained as much as possible by government.
Lex Fridman
(00:08:20)
So this point that Hunter S. Thompson has made is, “Dies unless it’s used.” Do you agree with that?
Robert F. Kennedy Jr
(00:08:28)
Yeah, I do agree with that, and he was not unique in saying that. Thomas Jefferson said that the Tree of Liberty had to be watered with the blood of each generation. And what he meant by that is that we can’t live off the laurels of the American Revolution. We had a group, we had a generation where between 25,000 and 70,000 Americans died. They gave their lives, they gave their livelihoods, they gave their status, they gave their property, and they put it all on the line to give us our Bill of Rights. But the moment that we signed that Bill of Rights, there were forces within our society that began trying to chip away at it, and that happens in every generation. And it is the obligation of every generation to safeguard and protect those freedoms.

Camus

Lex Fridman
(00:09:26)
The blood of each generation. You mentioned your interest, your admiration of Albert Camus, of Stoicism, perhaps your interest in existentialism. Camus said, I believe in The Myth of Sisyphus, “The only way to deal with an unfree world is to become so absolutely free that your very existence is an act of rebellion.” What do you think he means by that?
Robert F. Kennedy Jr
(00:09:49)
I suppose the way that Camus viewed the world, and the way that the Stoics and a lot of the existentialists did, was that it was so absurd, and that the problems and the tasks that were given just to live a life are so insurmountable, that the only way that we can get back at the gods for giving us this impossible task of living life was to embrace it and to enjoy it and to do our best at it. To me, I read Camus, and particularly The Myth of Sisyphus, as a parable that… And it’s the same lesson that I think he writes about in The Plague, where we’re all given these insurmountable tasks in our lives, but that by doing our duty, by being of service to others, we can bring meaning to a meaningless chaos and we can bring order to the universe.

(00:11:01)
And Sisyphus was the iconic hero of the Stoics, and he was a man who, because he did something good, because he delivered a gift to humanity, angered the gods, and they condemned him to push a rock up the hill every day, and then it would roll down. When he got to the top, it would roll down and he’d spend the night going back down the hill to collect it and then rolling it back up the hill again. And the task was absurd, it was insurmountable. He could never win, but the last line of that book is one of the great lines, which is something to the effect that one can picture Sisyphus smiling, because Camus’ belief was that even though his task was insurmountable, he was a happy man, and he was a happy man because he put his shoulder to the stone.

(00:11:59)
He took his duty, he embraced the task and the absurdity of life, and he pushed the stone up the hill. And that if we do that, and if we find ways of being of service to others, that is the ultimate, that’s the key to the lock, that’s the solution to the puzzle.
Lex Fridman
(00:12:21)
Each individual person in that way can rebel against absurdity by discovering meaning to this whole messy thing.
Robert F. Kennedy Jr
(00:12:28)
And we can bring meaning not only to our own lives, but we can bring meaning to the universe as well. We can bring some kind of order to life and the embrace of those tasks and the commitment to service resonates out from us to the rest of humanity in some way.

Hitler and WW2

Lex Fridman
(00:12:51)
So you mentioned The Plague by Camus. There’s a lot of different ways to read that book, but one of them, especially given when it was written, is that the plague symbolizes Nazi Germany and the Hitler regime. What do you learn about human nature from a figure like Adolf Hitler, that he’s able to captivate the minds of millions, rise to power, and pull the whole world into a global war?
Robert F. Kennedy Jr
(00:13:24)
I was born nine years after the end of World War II, and I grew up in a generation with my parents who were fixated on that, on what happened. At that time, the resolution in the minds of most Americans, and I think people around the world, was that there had been something wrong with the German people, that the Germans had been particularly susceptible to this kind of demagoguery and to following a powerful leader and just industrializing cruelty and murder. And my father always differed with that. My father said, “This is not a German problem. This could happen to all of us. We’re all just inches away from barbarity.” And the thing that keeps us safe in this country are the institutions of our democracy, our constitution. It’s not our nature. Our nature has to be restrained, and that comes through self-restraint.

(00:14:38)
But also, the beauty of our country is that we devised these institutions that are designed to allow us to flourish, to give us enough freedom to flourish, but also to create enough order to keep us from collapsing into barbarity. So one of the other things that my father talked about from when I was little, he would ask us this question: “If you were a family and Anne Frank came to your door and asked you to hide her, would you be one of the people who hid her, risking your own life, or would you be one of the people who turned her in?”

(00:15:24)
And of course, we would all say, “Well, of course we would hide Anne Frank and take the risk,” but that’s been something kind of a lesson, a challenge that has always been near the forefront of my mind, that if a totalitarian system ever occurs in the United States, which my father thought was quite possible, he was conscious about how fragile democracy actually is, would I be one of the ones who would resist the totalitarianism or would I be one of the people who went along with it? Would I be one of the people who was at the train station in Krakow, or even Berlin, and saw people being shipped off to camps and just put my head down and pretended I didn’t see it, because talking about it would be destructive to my career and maybe my freedom and even my life? So that has been a challenge that my father gave to me and all of my brothers and sisters, and it’s something that I’ve never forgotten.
Lex Fridman
(00:16:39)
A lot of us would like to believe we would resist in that situation, but the reality is most of us wouldn’t, and that’s a good thing to think about, that human nature is such that we’re selfish even when there’s an atrocity going on all around us.
Robert F. Kennedy Jr
(00:16:57)
And we also have the capacity to deceive ourselves, and all of us tend to judge ourselves by our intentions and our actions.
Lex Fridman
(00:17:08)
What have you learned about life from your father, Robert F. Kennedy?
Robert F. Kennedy Jr
(00:17:12)
First of all, I’ll say this about my uncle because I’m going to apply that question to my uncle and my father. My uncle was asked when he first met Jackie Bouvier, who later became Jackie Kennedy. She was a reporter for a newspaper and she had a column where she’d do these pithy interviews with both famous people and man-in-the-street interviews. And she was interviewing him and she asked him what he believed his best quality was, his strongest virtue. And she thought that he would say courage because he had been a war hero. He was the only president who… And this is when he was a Senator, by the way, who received the Purple Heart. And he had a very famous story of him as a hero in World War II. And then he had come home and he had written a book on moral courage among American politicians and won the Pulitzer Prize for that book, Profiles in Courage, which was a series of incidents where American political leaders made decisions to embrace principle even though their careers were at stake, and in most cases were destroyed by their choice.

(00:18:37)
She thought he was going to say courage, but he didn’t. He said curiosity. And I think, looking back at his life, that it was true, and that was the quality that allowed him to put himself in the shoes of his adversaries. And he always said that the only way that we’re going to have peace is if we’re able to put ourselves in the shoes of our adversaries, understand their behavior and their context. And that’s why he was able to resist the intelligence apparatus and the military during the Bay of Pigs when they said, “You’ve got to send in the Essex, the aircraft carrier.” And he said, “No.” Even though he’d only been two months in office, he was able to stand up to them because he was able to put himself in the shoes of both Castro and Khrushchev and understand there’s got to be another solution to this.

(00:19:40)
And then during the Cuban Missile Crisis, he was able to endure it when the narrative was, okay, Khrushchev acted as an aggressor to put missiles in our hemisphere. How dare he do that? And Jack and my father were able to say, “Well, wait a minute. He’s doing that because we put missiles in Turkey and Italy, and the Turkish ones right on the Russian border.” And they then made a secret deal with Dobrynin, with Ambassador Dobrynin and with Khrushchev, to remove the Jupiter missiles from Turkey so long as Khrushchev removed his missiles from Cuba. There were 13 men on what they called the [inaudible 00:20:36] Committee, which was the group of people who were deciding what the action was, what they were going to do to end the Cuban Missile Crisis.

(00:20:45)
And of those men, 11 of them wanted to bomb and invade, and it was Jack who resisted; my father and Bob McNamara were the only people who were with him. Because he was able to see the world from Khrushchev’s point of view, he believed that there was another solution. And then he also had the moral courage. So my father, to get back to your question, famously said that moral courage is the most important quality, and it’s more rare than courage on the football field or courage in battle, than physical courage. It’s much more difficult to come by, but it’s the most important quality in a human being.
Lex Fridman
(00:21:33)
And you think that kind of empathy that you referred to, that requires moral courage?
Robert F. Kennedy Jr
(00:21:37)
It certainly requires moral courage to act on it, and particularly in any time that a nation is at war, there’s a momentum or an inertia that says, “Okay, let’s not look at this from the other person’s point of view.” And that’s the time we really need to do that.

War in Ukraine

Lex Fridman
(00:22:03)
Well, if we can apply that style of empathy, that style of curiosity, to the current war in Ukraine, what is your understanding of why Russia invaded Ukraine in February 2022?
Robert F. Kennedy Jr
(00:22:16)
Vladimir Putin could have avoided the war in the Ukraine. His invasion was illegal. It was unnecessary, and it was brutal, but I think it’s important for us to move beyond these kind of comic book depictions of this insane, avaricious Russian leader who wants to restore the Soviet Empire, and who made an unprovoked invasion of the Ukraine. He was provoked, and we were provoking him, and we had been provoking him since 1997. And it’s not just me that’s saying that. And before Putin ever came in, we were provoking Russia, the Russians, in this way unnecessarily. And to go back to that time in 1992, when the Soviet Union was collapsing, the Russians moved out of East Germany, and they did that, which was a huge concession on their part.

(00:23:27)
They had 400,000 troops in East Germany at that time, and they were facing NATO troops on the other side of the wall. Gorbachev made this huge concession where he said to George Bush, “I’m going to move all of our troops out, and you can then reunify Germany under NATO,” which was a hostile army to the Soviet… It was created with hostile intent toward the Soviet Union. And he said, “You can take Germany, but I want your promise that you will not move NATO to the east.” And James Baker, who was his Secretary of State, famously said, “We will not move NATO one inch to the east.”

(00:24:07)
So then five years later, in 1997, Zbigniew Brzezinski, who was the “father of the neocons,” who was a Democrat at that time and had served in the Carter administration, published a paper, a blueprint for moving NATO right up to the Russian border, 1,000 miles to the east, and taking over 14 nations. And at that time, George Kennan, who was the deity of American diplomats, he was arguably the most important diplomat in American history, he was the architect of the containment policy, said, “This is insane and it’s unnecessary. And if you do this, it’s going to provoke the Russians to a violent response. And we should be making friends with the Russians. They lost the Cold War. We should be treating them the way that we treated our adversaries after World War II, with a Marshall Plan, to try to help them incorporate into Europe and to be part of the brotherhood of man and of western nations. We shouldn’t continue to be treating them as an enemy, and particularly surrounding them at their borders.”

(00:25:26)
William Perry, who was then the Secretary of Defense under Bill Clinton, threatened to resign. He was so upset by this plan to move NATO to the east. And William Burns, who was then the US Ambassador to Russia, who is now, at this moment, the head of the CIA, said the same thing at the time: “If you do this, it is going to provoke the Russians toward a military response.” And we moved all around Russia. We moved into 14 nations, 1,000 miles to the east, and we put Aegis missile systems in two nations, in Romania and Poland. So we did to them what the Russians had done to us in 1962, which would’ve provoked an invasion of Cuba. We put those missile systems there, and then we walked away, unilaterally walked away, from the two nuclear missile treaties, the intermediate nuclear missile treaties that we had with the Soviets, with Russia, under which neither of us would put those missile systems on the borders.

(00:26:31)
We walked away from that, and we put in Aegis missile systems, which are nuclear capable. They can carry the Tomahawk missiles, which have nuclear warheads. So the last country that they didn’t take was the Ukraine. And the Russians said, and in fact Bill Perry said this, or William Burns said it, now the head of the CIA, “It is a red line. If we bring NATO into Ukraine, that is a red line for the Russians. They cannot live with it. Russia has been invaded three times through the Ukraine. The last time it was invaded, we killed, or the Germans killed, one out of every seven Russians.”

(00:27:11)
My uncle described what happened to Russia in his famous American University speech in 1963, 60 years ago this month, or last month, 60 years ago in June, June 10th, 1963. That speech was telling the American people, “Put yourself in the shoes of the Russians. We need to do that if we’re going to make peace.” And he said, “All of us have been taught that we won the war, but we didn’t win the war. If anybody won the war against Hitler, it was the Russians. Their country was destroyed, all of their cities.” And he said, “Imagine if all of the cities from the East Coast to Chicago were reduced to rubble and all of the fields burned, all of the forests burned. That’s what happened to Russia. That’s what they gave so that we could get rid of Adolf Hitler.”

(00:28:08)
And he had them put themselves in the Russians’ position, and today there’s none of that happening. We have refused repeatedly to talk to the Russians. We’ve broken up two treaties, the Minsk Agreements, which the Russians were willing to sign, and they said, “We will stay out.” The Russians didn’t want the Ukraine. They showed that when the Donbas region voted 90 to 10 to leave and go to Russia. Putin said, “No, we want Ukraine to stay intact, but we want you to sign the Minsk Accords.” The Russians were very worried because of the US involvement in the coup in Ukraine in 2014, and then the oppression and the killing of 14,000 ethnic Russians. And Russia acted the same way that we would if Mexico put Aegis missile systems from China or Russia on our border and then killed 14,000 American expats; we would go in there.

(00:29:13)
So he does have a national security interest in the Ukraine. He has an interest in protecting the Russian-speaking people of the Ukraine, the ethnic Russians, and the Minsk Accords did that. It left the Donbas as part of Ukraine. It left them as a semi-autonomous region that continued to use their own language, which was essentially banned by the coup, by the government we put in in 2014, and we sabotaged that agreement. And we now know that in April of 2022, Zelenskyy and Putin had already inked a deal on another peace agreement, and that the United States, the neocons in the White House, sent Boris Johnson over to the Ukraine to sabotage that agreement.
Robert F. Kennedy Jr
(00:30:03)
… Boris Johnson over to the Ukraine to sabotage that agreement. What do I think? I think this is a proxy war. I think this is a war that the neocons and the White House wanted. They’ve said for two decades they wanted this war and that they wanted to use Ukraine as a pawn in a proxy war between United States and Russia, the same as we used Afghanistan.

(00:30:26)
And in fact, they say it, “This is the model. Let’s use the Afghanistan model.” That was said again and again. And to get the Russians to overextend their troops and then fight them using local fighters and US weapons.

(00:30:40)
And when President Biden was asked, “Why are we in the Ukraine?” He was honest. He says, “To depose Vladimir Putin. Regime change for Vladimir Putin.” And when his defense secretary Lloyd Austin in April 2022 was asked, “Why are we there?” He said, “To degrade the Russians’ capacity to fight anywhere… To exhaust the Russian army and degrade its capacity to fight elsewhere in the world.”

(00:31:05)
That’s not a humanitarian mission. That’s not what we were told. We were told this was an unprovoked invasion and that we’re there to bring humanitarian relief to the Ukrainians. But that is the opposite. That is a war of attrition that is designed to chew up and turn this little nation into an abattoir of death for the flower of Ukrainian youth in order to advance a geopolitical ambition of certain people within the White House. And I think that’s wrong.

(00:31:39)
We should be talking to the Russians the way that Nixon talked to Brezhnev, the way that Bush talked to Gorbachev, the way that my uncle talked to Khrushchev. We need to be talking with the Russians, we should, and negotiating. And we need to be looking about how do we end this and preserve peace in Europe.
Lex Fridman
(00:31:58)
Would you as president sit down and have a conversation with Vladimir Putin and Volodymyr Zelenskyy separately and together to negotiate peace?
Robert F. Kennedy Jr
(00:32:07)
Absolutely. Absolutely.
Lex Fridman
(00:32:09)
What about Vladimir Putin? He’s been in power since 2000. So as the old adage goes, “Power corrupts, and absolute power corrupts absolutely.” Do you think he has been corrupted by being in power for so long, if you think of the man, if you look at his mind?
Robert F. Kennedy Jr
(00:32:27)
Listen, I don’t know exactly. I can’t say because I don’t know enough about him or about… The evidence that I’ve seen is that he is homicidal. He kills his enemies or poisons them. And the reaction I’ve seen from him to those accusations has not been to deny them but to kind of laugh it off.

(00:32:58)
Oh, I think he’s a dangerous man and that, of course, there’s probably corruption in his regime. But having said that, it’s not our business to change the Russian government. And anybody who thinks it’s a good idea to do a regime change in Russia, which has more nuclear weapons than we do, is I think irresponsible.

(00:33:22)
And Vladimir Putin himself has said, “We will not live in a world without Russia.” And it was clear when he said that he was talking about himself. And he has his hand on a button that could bring Armageddon to the entire planet.

(00:33:40)
So why are we messing with this? It’s not our job to change that regime. We should be making friends with the Russians. We shouldn’t be treating him as an enemy. Now we’ve pushed him into the camp with China. That’s not a good thing for our country.

(00:33:55)
And by the way, what we’re doing now does not appear to be weakening Putin at all. If you believe the polls that are coming out of Russia, the most recent polls that I’ve seen show him with an 89% popularity, that people in Russia support the war in Ukraine, and they support him as an individual.

(00:34:25)
And I understand there’s problems with polling and you don’t know what to believe, but the polls consistently show that. And it’s not America’s business to be the policemen of the world and to be changing regimes in the world. That’s illegal.

(00:34:41)
We shouldn’t be breaking international laws. We should actually be looking for ways to improve relationships with Russia, not to destroy Russia, not to destroy, and not to choose its leadership for them. That’s up to the Russian people, not us.
Lex Fridman
(00:35:00)
Step one is to sit down and empathize with the leaders of both nations, to understand their history, their concerns, their hopes, just to open the door for conversation so they’re not backed into a corner.
Robert F. Kennedy Jr
(00:35:12)
Yeah. And I think the US can play a really important role, and a US president can play a really important role by reassuring the Russians that we’re not going to consider them an enemy anymore, that we want to be friends.

(00:35:26)
And it doesn’t mean that you have to let down your guard completely. The way that you do it, which was the way President Kennedy did it, is you do it one step at a time. You take baby steps. We do a unilateral move, reduce our hostility and aggression, and see if the Russians reciprocate. And that’s the way that we should be doing it.

(00:35:50)
And we should be easing our way into a positive relationship with Russia. We have a lot in common with Russia, and we should be friends with Russia and with the Russian people. Apparently, there’s been 350,000 Ukrainians who have died, at least, in this war. And there’s probably been 60,000 or 80,000 Russians. And that should not give us any joy. It should not give us any…

(00:36:21)
I saw Lindsey Graham on TV saying something to the effect of, “Anything we can do to kill Russians is a good use of our money.” It is not. Those are somebody’s children. We should have compassion for them. This war is an unnecessary war. We should settle it through negotiation, through diplomacy, through statecraft, and not through weapons.
Lex Fridman
(00:36:50)
Do you think this war can come to an end purely through military operations?
Robert F. Kennedy Jr
(00:36:55)
No. I mean, I don’t think there’s any way in the world that the Ukrainians can beat the Russians. I don’t think there’s any appetite in Europe… I think Europe is now having severe problems. In Germany, Italy, France, you’re seeing these riots. There’s internal problems in those countries.

(00:37:12)
There is no appetite in Europe for sending men to die in Ukraine. And the Ukrainians do not have anybody left. The Ukrainians are using press gangs to fill the ranks of their armies. Military-age men are trying as hard as they can to get out of the Ukraine right now to avoid going to the front.

(00:37:35)
The Russians apparently have been killing Ukrainians in a 7:1 ratio. My son fought over there, and he told me… He had firefights with the Russians mainly at night, but he said most of the battles were artillery wars during the day. And the Russians now outgun the NATO forces 10:1 in artillery. They’re killing at a horrendous rate.

(00:38:06)
Now, my interpretation of what’s happened so far is that Putin actually went in early on with a small force because he expected to meet somebody on the other end of a negotiating table once he went in. And when that didn’t happen, they did not have a large enough force to be able to mount an offensive.

(00:38:32)
And so they’ve been building up that force up till now, and they now have that force. And even against the small original force, the Ukrainians have been helpless. All of their offensives have died. They’ve now killed the head of the Ukrainian special forces, which was probably, arguably, by many accounts, the best elite military unit in all of Europe.

(00:39:01)
The commandant, the commander of that special forces group gave a speech about four months ago saying that 86% of his men are dead or wounded and cannot return to the front. He cannot rebuild that force. And the troops that are now filling the gaps of all those 350,000 men who’ve been lost are scantily trained, and they’re arriving green at the front.

(00:39:36)
Many of them do not want to be there. Many of them are giving up and going over to the Russian side. We’ve seen this again and again and again, including platoon-sized groups that are defecting to the Russians.

(00:39:48)
And I don’t think it’s possible to win. Of course, I’ve studied World War II history exhaustively, but I saw… There’s a new… I think it’s a Netflix series of documentaries that I highly recommend to people. They’re colorized versions of the black-and-white films from the battles of World War II, but it’s all the battles of World War II.

(00:40:15)
So I watched Stalingrad the other night. And the willingness of the Russians to fight on against any kind of odds and to make huge sacrifices of Russians, the Russians themselves who are making the sacrifice with their lives, the willingness of them to do that for their motherland is almost inexhaustible.

(00:40:40)
It is incomprehensible to think that Ukraine can beat Russia in a war. It would be like Mexico beating the United States. It’s impossible to think that it can happen. And Russia has deployed a tiny, tiny fraction of its military so far. And now it has China with its mass production capacity supporting its war effort. It’s a hopeless situation.

(00:41:11)
And we’ve been lied to. The press in our country and our government are just promoting this lie that the Ukrainians are about to win and that everything’s going great and that Putin’s on the run. And there’s all this wishful thinking because of the Wagner Group-
Lex Fridman
(00:41:30)
Prigozhin.
Robert F. Kennedy Jr
(00:41:30)
… Prigozhin and the Wagner Group, that this was an internal coup, and it showed dissent and weakness of Putin. And none of that is true. That insurgency, which wasn’t even an insurgency…

(00:41:44)
He only got 4,000 of his men to follow him out of 20,000. And they were quickly stopped. And nobody in the Russian military, the oligarchy, the political system, nobody supported it. But we’re being told, “Oh yeah, it’s the beginning of the end for Putin. He’s weakened. He’s wounded. He’s on his way out.” And all of these things are just lies that we are being fed.
Lex Fridman
(00:42:07)
To push back on a small aspect of this that you kind of implied, so I’ve traveled to Ukraine, and one thing that I should say, similar to the Battle of Stalingrad, it is not only the Russians that fight to the end. I think Ukrainians are willing to fight to the end.

(00:42:24)
And the morale there is quite high. I’ve talked to nobody… This was a year ago in August, in Kherson. Everybody was proud to fight and die for their country. And there’s some aspect where this war unified the people, gave them a reason and an understanding that this is what it means to be Ukrainian and, “I will fight to the death to defend this land.”
Robert F. Kennedy Jr
(00:42:48)
I would agree with that, and I should have said that myself at the beginning. That’s one of the reasons my son went over there to fight because he was inspired by the valor of the Ukrainian people and this extraordinary willingness of them.

(00:43:02)
And I think Putin thought it would be much easier to sweep into Ukraine, and he found a stone wall of Ukrainians ready to put their lives and their bodies on the line. But that, to me, makes the whole episode even more tragic, is that I don’t believe… I think that the US’s role in this has been… There were many opportunities to settle this war, and the Ukrainians wanted to settle it.

(00:43:34)
Volodymyr Zelenskyy, when he ran in 2019, here’s a guy who’s a comedian, he’s an actor. He had no political experience, and yet he won this election with 70% of the vote. Why? He won on a peace platform, and he won promising to sign the Minsk accords. And yet something happened when he got in there that made him suddenly pivot. And I think it’s a good guess what happened.

(00:44:02)
I think he came under threat by ultra-nationalists within his own administration and the insistence of neocons like Victoria Nuland in the White House, that we don’t want peace with Putin. We want a war.
Lex Fridman
(00:44:20)
Do you worry about nuclear war?
Robert F. Kennedy Jr
(00:44:22)
Yeah, I worry about it.
Lex Fridman
(00:44:25)
It seems like a silly question, but it’s not. It’s a serious question.
Robert F. Kennedy Jr
(00:44:29)
Well, the reason it’s not is just because people seem to be in this kind of dream state that it’ll never happen, and yet it can happen very easily and it can happen at any time.

(00:44:48)
And if we push the Russians too far, I don’t doubt that Putin, if he felt like his regime or his nation was in danger, that the United States was going to be able to place a quisling into the Kremlin, that he would use nuclear torpedoes and these strategic weapons that they have. And that could be it. Once you do that, nobody controls the trajectory.

JFK and the Cuban Missile Crisis


(00:45:24)
By the way, I have very strong memories of the Cuban Missile Crisis and of those 13 days when we came closest to nuclear war. And particularly, I think it was when the U-2 got shot down over Cuba. And nobody in this country… There’s a lot of people in Washington, D.C., who, at that point, thought that they very well may wake up dead, that the world may end that night.

(00:45:55)
30 million Americans killed, 130 million Russians killed. This is what our military brass wanted. They saw a war with Russia, a nuclear exchange with Russia, as not only inevitable but also desirable, because they wanted to do it now while we still had superiority.
Lex Fridman
(00:46:14)
Can you actually go through the feelings you’ve had about the Cuban Missile Crisis? What are your memories of it? What are some interesting-
Robert F. Kennedy Jr
(00:46:21)
I was going to school in Washington, D.C., to Our Lady of Victory. I lived in Virginia across the Potomac, and we would cross the bridge every day into D.C.

(00:46:38)
And during the crisis, U.S. Marshals came to my house to take us, I think around day eight. My father was spending the night at the White House. He wasn’t coming home. He was staying with the EXCOM committee and sleeping there. And they were up 24 hours a day. They were debating and trying to figure out what was happening.

(00:47:00)
But we had U.S. Marshals come to our house to take us down… They were going to take us down to White Sulphur Springs in Southern Virginia, in the Blue Ridge Mountains, where there was an underground city, essentially, a bunker that was like a city. And apparently, it had McDonald’s in it and a lot of other… It was a full city for the U.S. Government and their families.

(00:47:29)
U.S. Marshals came to our house to take us down there. And I was very excited about doing that. And this was at a time when we were doing the drills. We were doing the duck-and-cover drills once a week at our school, where they would tell you, when the alarms go off, you remove the sharps from your desk, put them inside your desk, you put your head under the table, and you wait.

(00:47:56)
And the initial blast will take the windows out of the school. And then we all stand up and file in an orderly fashion into the basement where we’re going to be for the next six or eight months or whatever.

(00:48:08)
But in the basement where we went occasionally, those corridors were lined with freeze-dried food canisters from floor to ceiling. We were all preparing for this. And it was Bob McNamara, who was a friend of mine and one of my father’s close friends, the Secretary of Defense, who later called it mass psychosis.

(00:48:34)
And my father deeply regretted participating in the bomb shelter program because he said it was part of a psychological psyop trick to teach Americans that nuclear war was acceptable, that it was survivable. My father, anyway, when the Marshals came to our house to take me and my brother Joe away, we were the ones who were home at that time, my father called, and he talked to us on the phone.

(00:49:05)
And he said, “I don’t want you going down there because if you disappear from school, people are going to panic. And I need you to be a good soldier and go to school.” And he said something to me during that period, which was that if a nuclear war happened, it would be better to be among the dead than the living, which I did not believe. Okay?

(00:49:31)
I had already prepared myself for the dystopian future. And I knew… I spent every day in the woods. I knew that I could survive by catching crawfish and cooking mudpuppies and would do whatever I had to do. But I felt like, okay, I can handle this. And I really wanted to see this underground city. But anyway, that was part of it for me.

(00:50:01)
My father was away the last days of it. My father got this idea because Khrushchev had sent two letters. He sent one letter that was conciliatory. And then, after his joint chiefs and the warmongers around him saw that letter and disapproved of it, they sent another letter that was extremely belligerent.

(00:50:25)
And my father had the idea, “Let’s just pretend we didn’t get the second letter and reply to the first one.” And then he went down to Dobrynin. He met Dobrynin in the Justice Department. And Dobrynin was the Soviet ambassador. And they proposed this settlement, which was a secret settlement, where Khrushchev would withdraw the missiles from Cuba.

(00:50:52)
Khrushchev had put the missiles in Cuba because we had put missiles, nuclear missiles, in Turkey and Italy. And my uncle’s secret deal was that if Khrushchev removed the missiles from Cuba within six months, he would get rid of the Jupiter missiles in Turkey.

(00:51:10)
But if Khrushchev told anybody about the deal, it was off. So if news got out about that secret deal, it was off. But that was the actual deal. And Khrushchev complied with it, and then my uncle complied with it.
Lex Fridman
(00:51:25)
How much of that part of human history turned on the decisions of one person?
Robert F. Kennedy Jr
(00:51:31)
I think that’s one of the… Because that, of course, is the perennial question. Right? Is history on automatic pilot? And human decisions and the decisions of leaders really only have a marginal or incremental bearing on what is going to happen anyway. And historians argue about that all the time.

(00:51:57)
I think that that is a really good example of a place in human history that, literally, the world could have ended if we had a different leader in the White House. And the reason for that is that there were, as I recall, 64 gun emplacements, missile emplacements. Each one of those missile emplacements had a crew of about 100 men, and they were Soviets.

(00:52:29)
We didn’t know whether… We had a couple of questions that my uncle asked the CIA. And he asked… Dulles was already gone. But he asked the CIA. And he asked his military brass. Because they all wanted to go in. Everybody wanted to go in. And my uncle asked to see the aerial photos, and he examined those personally.

(00:52:53)
And this is why it’s important to have a leader in the White House who can push back on their bureaucracies. And then he asked them, “Who’s manning those missile sites? And are they Russians? And if they’re Russians and we bomb them, isn’t it going to force Khrushchev to then go into Berlin?”

(00:53:20)
And that would be the beginning of a cascade effect that would highly likely end in a nuclear confrontation. And the military brass said to my uncle, “Oh, we don’t think he’ll have the guts to do that.” My uncle was like, “That’s what you’re betting on?”

(00:53:42)
And they all wanted him to go in. They wanted him to bomb the sites and then invade Cuba. And he said, “If we bomb those sites, we’re going to be killing Russians. And it’s going to force… it’s going to provoke Russia into some response. And the obvious response is for them to go into Berlin.”

(00:54:02)
But the thing that we didn’t know then, that we didn’t find out until, I think it was a 30-year anniversary of the Cuban Missile Crisis in Havana, what we learned then from the Russians who came to that event… It was like a symposium where everybody on both sides talked about it. And we learned a lot of stuff that nobody knew before.

(00:54:30)
One of the insane things, the most insane thing that we learned was that the weapons were already… the nuclear warheads were already in place, they were ready to fire, and that the authorization to fire was delegated to each of the gun crew commanders. So there were 60 people who all had authorization to fire if they felt themselves under attack.

(00:54:59)
So you have to believe that at least one of them would’ve launched, and that would’ve been the beginning of the end. And if anybody had launched, we knew what would happen. My uncle knew what would happen. Because he asked again and again, “What’s going to happen?” And they said, “30 million Americans will be killed, but we will kill 130 million Russians, so we will win.” And that was a victory for them.

(00:55:28)
And my uncle later said, he told Arthur Schlesinger and Kenny O’Donnell, he said, “Those guys…” He called them the salad brass, the guys with all of this stuff on their chest. And he said, “Those guys, they don’t care. Because they know that if it happens, they’re going to be in charge of everything. They’re the ones who are going to be running the world after that.”

(00:55:51)
So for them, there was an incentive to kill 130 million Russians and 30 million Americans. But my uncle, he had this correspondence with Khrushchev. They were secretly corresponding with each other. And that is what saved the world, is that both of them had been men of war.

(00:56:10)
Eisenhower famously said, “It will not be a man of war, it will not be a soldier who starts World War III. Because a guy who’s actually seen it knows how bad it is.” And my uncle had been in the heat of the South Pacific. His boat had been cut in two by a Japanese destroyer.

(00:56:30)
Three of his crewmen had been killed, one of them badly burned. He pulled that guy, with a lanyard in his teeth, six miles to an island in the middle of the night. And then they hid out there for 10 days. And he came back. Like I said, he was the only President of the United States that earned the Purple Heart.

(00:56:50)
Meanwhile, Khrushchev had been at Stalingrad, which was the worst place to be on the planet, probably, in the 20th century, other than in Auschwitz or one of the death camps. It was the most ferocious, horrific war, with people starving, people committing cannibalism, eating the dogs, the cats, eating their shoe leather, freezing to death by the thousands, etc.

(00:57:19)
Khrushchev did not want… The last thing he wanted was a war. And the last thing my uncle wanted was a war. But the CIA did not know anything about Khrushchev. And the reason for that is there was a mole at Langley so that every time the CIA got a spy in the Kremlin, he would immediately be killed.

(00:57:43)
So they had no eyes in the Kremlin. There were literally hundreds of Russian spies working for the United States inside the Kremlin who were killed during that period. They had no idea about Khrushchev, about how he saw the world. And they saw the Kremlin itself as a monolith.

(00:58:06)
The same way that we look at Putin today, they have this ambition of world conquest and it’s driving them. And there’s nothing else they think about. They’re absolutely single-minded about it.

(00:58:18)
But actually, there was a big division between Khrushchev and his joint chiefs and his intelligence apparatus. And they both, at one point, discovered they were both in the same situation. They were surrounded by spies and military personnel who were intent on going to war, and they were the two guys resisting it.

(00:58:39)
My uncle had this idea of being the peace president from the beginning. He told Ben Bradlee, one of his best friends, who was the publisher of The Washington Post, or the editor-in-chief, at that time… Ben Bradlee asked him, “What do you want on your gravestone?” And my uncle said, “He kept the peace.” He said, “The principal job of the President of the United States is to keep the country out of war.”

(00:59:11)
So when he first became president, he actually agreed to meet Khrushchev in Geneva to do a summit. And by the way, Eisenhower had wanted to do the same thing. Eisenhower wanted peace, and he was going to meet in Vienna. But that peace summit was blown up. He was going to try to end the Cold War.

(00:59:37)
Eisenhower was in the last year of his… in May of 1960. But that was torpedoed by the CIA with the U-2 crash. They sent a U-2 over the Soviet Union, and it got shot down. And then Allen Dulles told Eisenhower to deny that we had a program. They didn’t know that the Russians had captured Francis Gary Powers.
Robert F. Kennedy Jr
(01:00:00)
…Francis Gary Powers. And that blew up the peace talks between Eisenhower and Khrushchev, and there was a lot of tension. My uncle wanted to break that tension. He agreed to meet with Khrushchev in Vienna early on in his term. He went over there and Khrushchev snubbed him. Khrushchev lectured him imperiously about the terror of American imperialism, and rebuffed any… They did agree not to go into Laos. They made an agreement that kept the United States, kept my uncle, from sending troops to Laos, but it had been a disaster in Vienna.

(01:00:48)
So then, we had a spy that used to come to our house all the time, a guy called Georgi Bolshakov, and he was this Russian spy my parents had met at the embassy. They had gone to a party or a reception at the Russian Embassy, and he had approached them, and they knew he was a GRU agent and KGB, he was both. So he used to come to our house. They really liked him. He was very attractive. He was always laughing and joking. He would do rope-climbing contests with my father. He would do pushup contests with my father. He could do the Russian dancing, the Cossack dancing, and he would do that for us and teach us that. And we knew he was a spy too, and this was at the time the James Bond films were first coming out, so it was really exciting for us to have an actual Russian spy in our house. The State Department was horrified by it.

(01:01:44)
But anyway, after Vienna and after the Bay of Pigs, Khrushchev had second thoughts, and he sent this long letter to my uncle. And he didn’t want to go through his state department or his embassy, he wanted to end-run them. And he was friends with Bolshakov, so he gave Georgi the letter, and Georgi brought it and handed it to Pierre Salinger, folded in the New York Times. And he gave it to my uncle.

(01:02:21)
And it was this beautiful letter, in which he said… My uncle had talked to him about the children, we, the 29 grandchildren, who were playing in his yard. And he’s saying, what is our moral basis for making a decision that could kill these children? So they’ll never write a poem, they’ll never participate in an election, they’ll never run for office. How can we, morally, make a decision that is going to eliminate life for these beautiful kids?

(01:02:52)
And he had said that to Khrushchev, and Khrushchev wrote him this letter back saying that he was now sitting at his dacha on the Black Sea, and that he was thinking about what my uncle Jack had said to him at Vienna. And he regretted very deeply not having taken the olive branch that Jack had offered him. And then he said, it occurs to me now that we’re all on an ark and that there is not another one, and that the entire fate of the planet, and all of its creatures and all of the children, are dependent on the decisions we make. And you and I have a moral obligation to go forward with each other as friends.

(01:03:34)
And immediately after that… He sent that right after the Berlin crisis in 1962, when General Curtis LeMay had tried to provoke a war with an incident at Checkpoint Charlie, which was the entrance and exit through the Berlin Wall. And the Russian tanks had come to the wall. The US tanks had come to the wall, and there was a standoff. And my uncle had sent a message to Khrushchev then, through Dobrynin, saying, my back is at the wall. I have no place to back to, please back off, and then we will back off. And Khrushchev took his word, backed his tanks off first, and then my uncle ordered LeMay back. LeMay had mounted bulldozer plows on the front of the tanks to plow down the Berlin Wall, and the Russians had come, so it was these generals trying to provoke a war.

(01:04:44)
But they started talking to each other then. And then after he wrote that letter, they agreed that they would install a hotline, so they could talk to each other and they wouldn’t have to go through intermediaries. And so at Jack’s house on the Cape, there was a red phone that we knew if we picked it up, Khrushchev would answer. And there was another one in the White House. But they knew it was important to talk to each other. And you just wish that we had that kind of leadership today, that just understands our job.

(01:05:21)
Look, I know you know a lot about AI, and you know how dangerous it is, potentially, to humanity, and what opportunities it also offers, but it could kill us all. I mean, Elon said, first it’s going to steal our jobs, then it’s going to kill us. Right? And it’s probably not a hyperbole. Actually, if it follows the laws of biological evolution, which are just the laws of mathematics, that’s probably a good endpoint for it, a potential endpoint. It’s going to happen, but we need to make sure it’s regulated, and it’s regulated properly for safety, in every country. And that includes Russia and China and Iran. Right now, we should be putting all the weapons of war aside and sitting down with those guys and saying, how are we going to do this? There’s much more important things to do. This stuff is going to kill us if we don’t figure out how to regulate it. And leadership needs to look down the road at what is the real risk here. And the real risk is that AI will enslave us, for one thing, and then destroy us, and do all this other stuff.

(01:06:42)
And how about biological weapons? We’re now all working on these biological weapons, and we’re doing biological weapons for Ebola, and Dengue Fever, and all of these other bad things. And we’re making ethnic bio-weapons, bio-weapons that can only kill Russians, bio-weapons that the Chinese are making that can kill people who don’t have Chinese genes. So all of this is now within reach. We’re actively doing it, and we need to stop it. And a biological weapons treaty is the easiest thing in the world to do. We can verify it, we can enforce it, and everybody wants to agree to it. Only insane people want to continue this kind of research; there’s no reason to do it.

(01:07:33)
So there are these existential threats to all of humanity now out there, like AI and biological weapons. We need to stop fighting each other, start competing on economic playing fields instead of military playing fields, which will be good for all of humanity. And we need to sit down with each other and negotiate reasonable treaties on how we regulate AI and biological weapons. And nobody’s talking about this in this political race right now. Nobody’s talking about it in government. They get fixated on these little wars, and these comic book depictions of good versus evil, and we all go hoorah and go off and give them the weapons and enrich the military-industrial complex, but we’re on the road to perdition if we don’t end this.
Lex Fridman
(01:08:29)
And some of this requires having this kind of phone that connects Khrushchev and John F. Kennedy, that cuts through all the bureaucracy, to have this communication between heads of state, and in the case of AI, perhaps heads of tech companies, where you can just pick up the phone and have a conversation.
Robert F. Kennedy Jr
(01:08:46)
Yes.
Lex Fridman
(01:08:46)
Because a lot of it, a lot of the existential threats of artificial intelligence, perhaps even bio-weapons, is unintentional. It’s not even strategic-
Robert F. Kennedy Jr
(01:08:56)
Exactly.
Lex Fridman
(01:08:56)
-intentional effects, so you have to be transparent and honest, especially with AI, about the fact that people might not know what’s the worst that’s going to happen once you release it out into the wild. And you have to have an honest communication about how to do it, so that companies are not terrified of regulation, of regulatory overreach. And then government is not terrified of tech companies manipulating them in some direct or indirect way, so there’s a trust that builds versus a distrust. Basically, that old phone, where Khrushchev can call John F. Kennedy, is needed.
Robert F. Kennedy Jr
(01:09:35)
And I don’t think there’s… Listen, I don’t understand AI. I do know, I can see from all this technology, how it’s this turnkey totalitarianism, that once you put these systems in place, they can be misused to enslave people, and they can be misused in wars, and to subjugate, to kill, to do all of these bad things. And I don’t think there’s anybody on Capitol Hill, who understands this. We need to bring in the tech community and say, tell us what these regulations need to look like, so that there can be freedom to innovate, so that we can milk AI for all of the good things, but not fall into these traps that pose existential threats to humanity.

JFK assassination conspiracy

Lex Fridman
(01:10:31)
It seems like John F. Kennedy is a singular figure, in that he was able to have the humility to reach out to Khrushchev, and also the strength and integrity to resist the, what did you call them, the salad brass and institutions like the CIA, so that makes it particularly tragic that he was killed. To what degree was CIA involved, or the various bureaucracy involved in his death?
Robert F. Kennedy Jr
(01:11:00)
The evidence that the CIA was involved in my uncle’s murder, and that they were subsequently involved in the coverup, and continue to be involved in the coverup, I mean, there’s still 5,000 documents that they won’t release 60 years later, is, I think, so insurmountable and so mountainous and overwhelming that it’s beyond any reasonable doubt, including dozens of confessions of people who were involved in the assassination. But every kind of document… And I mean, it came as a surprise recently to most Americans, I think, the release of these documents in which the press, the American media, finally acknowledged that, yeah, Lee Harvey Oswald was a CIA asset, that he was recruited in 1957. He was a Marine working at the Atsugi Air Force Base, which was the CIA Air Force base with the U-2 flights, which was a CIA program. And that he was recruited by James Jesus Angleton, who was the director of counterintelligence, and then sent on a fake defection to Russia and then brought back to Dallas.

(01:12:34)
And people didn’t know that, even though it’s been known for decades, it never percolated into the mainstream media, because they have such an allergy to anything that challenges the Warren Report. When Congress investigated my uncle’s murder in the 1970s, the Church Committee did, and they did a two-and-a-half-year investigation, and they had many, many more documents, and much more testimony, available to them than the Warren Commission had, and this was a decade after the Warren Commission. They came to the conclusion that my uncle was killed by a conspiracy. And there was a division, where essentially one guy on that committee believed it was primarily the mafia, but Richard Schweiker, who was the senator at the head of the committee, said straight out, the CIA was involved in the murder of the President of the United States.

(01:13:42)
I’ve talked to most of the staff on that committee, and they said, yeah, the CIA was stonewalling us the whole way through. And the actual person that the CIA appointed as a liaison to the committee, George Joannides, they brought him out of retirement; he had been one of the masterminds of the assassination.

(01:14:06)
I mean, it’s impossible to even talk about a tiny fraction of the evidence here. What I suggest to people: there are hundreds of books written about this that assemble this evidence and mobilize the evidence. The best book, to me, for people to read is James Douglass’s book, which is called JFK and the Unspeakable. Douglass is an extraordinary scholar, and he does just an amazing job of digesting and summarizing and mobilizing all of them, probably a million documents, and the evidence from all these confessions that have come out, into a coherent story. And it’s riveting to read. And I recommend, people, do not take my word for it, and don’t take anybody else’s word for it, go ahead and do the research yourself. And one way to do that, probably the most efficient way, is to read Douglass’s book. He has all the references there.
Lex Fridman
(01:15:08)
So if it’s true that CIA had a hand in this assassination, how is it possible for them to amass so much power? How is it possible for them to become corrupt? And is it individuals, or is it the entire institution?
Robert F. Kennedy Jr
(01:15:22)
No, it’s not the entire institution. My daughter-in-law, who’s helping to run my campaign, was in the CIA, in the clandestine service, for all her career. She was a spy in the Weapons of Mass Destruction program in the Middle East and in China. And there’s 22,000 people who work for the CIA; probably 20,000 of those are patriotic Americans and really good public servants, and they’re doing important work for our country. But the institution is corrupt, particularly the higher-up ranks of the institution. And in fact, Mike Pompeo said something like this to me the other day. He was the director of the CIA. He said, “When I was there, I did not do a good job of cleaning up that agency.” And he said, “The entire upper bureaucracy of that agency are people who do not believe in the institutions of democracy.” This is what he said to me. I don’t know if that’s true, but I know that that’s significant. He’s a smart person, and he ran the agency and he was the Secretary of State.

(01:16:32)
But it’s no mystery how that happened. We know the history. The CIA was originally… First of all, there was great reluctance in 1947, because for the first time, we’d had a secret spy agency in this country during World War II, called the OSS. That was disbanded after the war, because Congress said having a secret spy agency is incompatible with a democracy. Secret spy agencies are things like the KGB, the Stasi in East Germany, SAVAK in Iran, and PEEP, and Chile and whatever, all over the world; they all have to do with totalitarian governments. They’re not something that you can have; it’s antithetical to democracy to have that. But in 1947, we created it, Truman signed it in, but it was initially an espionage agency, which means information gathering, which is important. It’s to gather and consolidate information from many, many different sources from all over the world, and then put those in reports for the White House, so the president can make good decisions based upon valid information, evidence-based decision making.

(01:17:57)
But Allen Dulles, who was essentially the first head of the agency, made a series of legislative machinations and political machinations that gave additional powers to the agency, and opened up what they called then the plans division. The plans division is the dirty tricks; it’s the black ops, fixing elections, murdering, what they call executive action, which means killing foreign leaders, and making small wars, and bribing, and blackmailing, people stealing elections, and that kind of thing. And the reason: at that time, we were in the middle of the Cold War, and Truman, and then Eisenhower, did not want to go to war. They didn’t want to commit troops. And it seemed to them that this was a way of fighting the Cold War secretly, and doing it at minimal cost, by changing events sort of invisibly. And so it was seductive to them.

(01:19:08)
But everybody… Congress, when they first voted it in place, both political parties said, if we create this thing, it could turn into a monster and it could undermine our values. And today it’s so powerful, and nobody knows what its budget is. Plus it has its own investment fund, In-Q-Tel, which has made, I think, 2,000 investments in Silicon Valley. So it has ownership of a lot of these tech companies, and a lot of the CEOs of those tech companies have signed state secrecy agreements with the CIA, which, if they even reveal that they have signed that, they can go to jail for 20 years and have their assets removed, et cetera. The influence that the agency has, the capacity to influence events at every level in our country, is really frightening.

CIA influence


(01:20:03)
And then, for most of its life, the CIA was banned from propagandizing Americans, but we learned that they were doing it anyway. So in 1973, during the Church Committee hearings, we learned that the CIA had a program called Operation Mockingbird, where they had at least 400 members, leading members of the United States press corps, at the New York Times, the Washington Post, ABC, CBS, NBC, et cetera, who were secretly working for the agency and steering news coverage to support CIA priorities. And they agreed at that time to disband Operation Mockingbird in ’73. But there’s indications they didn’t do that.

(01:20:56)
And still, the CIA today is the biggest funder of journalism around the world. The biggest funding is through USAID. The United States funds journalism in almost every country in the world. It owns newspapers; it has thousands and thousands of journalists on its payroll. They’re not supposed to be doing that in the United States. But in 2016, President Obama changed the law to make it legal now for the CIA to propagandize Americans. And I think we can’t look at the Ukraine War and how the narrative has been formed in the minds of Americans, and say that the CIA had nothing to do with that.
Lex Fridman
(01:21:46)
Well, what is the mechanism by which the CIA influences the narrative? Do you think it’s indirectly?
Robert F. Kennedy Jr
(01:21:51)
Through the press.
Lex Fridman
(01:21:52)
Indirectly through the press, or directly by funding the press?
Robert F. Kennedy Jr
(01:21:55)
Directly. I mean, there are certain press organs that have been linked to the agency, where the people who run those organs, things like the Daily Beast, and now Rolling Stone, whose editor, Noah Shachtman, has deep relationships with the intelligence community, Salon, Daily Kos.
Lex Fridman
(01:22:19)
But I wonder why they would do it. From my perspective, it just seems like the job of a journalist is to have an integrity where your opinion cannot be influenced or bought.
Robert F. Kennedy Jr
(01:22:30)
I agree with you, but I actually think that the entire field of journalism has really shamed itself in recent years. The principal newspapers in this country and the television stations, the legacy media, have abandoned their tradition. When I was a kid, listen, my house was filled with the greatest journalists alive at that time, people like Ben Bradlee, Anthony Lewis, Mary McGrory, Pete Hamill, Jack Newfield, Jimmy Breslin, and many, many others. And after my father died, they started the RFK Journalism Awards to recognize journalistic integrity and courage. And that generation of journalists believed that the function of journalism was to maintain a posture of fierce skepticism toward any aggregation of power, including government authority; that people in authority lie, and that they always have to be questioned; and that their job was to speak truth to power and to be guardians of the First Amendment and free expression.

(01:23:57)
But if you look at what happened during the pandemic, it was the inverse of that kind of journalism. The major press organs in this country, instead of speaking truth to power, were doing the opposite. They were broadcasting propaganda. They became propaganda organs for the government agencies. And they were actually censoring the speech of dissent, anybody who dissented, the powerless. And in fact, it was an organized conspiracy, and the name of it was the Trusted News Initiative. Some of the major press organs in our country signed onto it, and they agreed not to print stories or facts that departed from government orthodoxy. So the Washington Post was a signatory, as were UPI, the AP, and then the four social media groups, Microsoft, Twitter, Facebook, and Google, all signed on to the Trusted News Initiative.

(01:24:59)
It was started by the BBC, organized by them. And the purpose of it was to make sure nobody could print anything about government that departed from governmental orthodoxy. And the way it worked is, UPI and the AP, which are the news services that provide most of the news around the country, and the Washington Post, would decide what news was permissible to print. And a lot of it was about COVID, but also Hunter Biden’s laptop; it was impermissible to suggest that it was real, or that it had stuff on there that was compromising.

(01:25:39)
And by the way, what I’m telling you is all well documented, and I’m litigating on it right now. I’m part of a lawsuit against the TNI, so I know a lot about what happened, and I have all this documented, and people can go to our website. There’s a letter on my Substack now, to Michael Scherer of the Washington Post, that outlines all this and gives all my sources, because Michael Scherer accused me of being a conspiracy theorist when he was actually part of a conspiracy, a true conspiracy, to suppress anybody who was departing from government orthodoxies, by either censoring them completely or labeling them conspiracy theorists.
Lex Fridman
(01:26:26)
I mean, you can understand the difference between the intention and the action, as we talked about. You can understand the intention of such a thing being good: that in a time of catastrophe, in a time of a pandemic, there’s a lot of risk to saying untrue things. But that’s a slippery slope that leads into a place where the journalistic integrity that we talked about is completely sacrificed, and then you can deviate from truth.
Robert F. Kennedy Jr
(01:26:54)
If you read their internal memoranda, including the statements of the leader of the Trusted News Initiative, I think her name is Jessica Cecil, and you can go on our website and see her statement. She says the purpose of this is that we’re now… Actually, she says, when people look at us, they think we’re competitors, but we’re not. The real competitors are coming from all these alternative news sources now all over the network, and they’re hurting public trust in us, and they’re hurting our economic model, and they have to be choked off and crushed. And the way that we’re going to do that is to make an agreement with the social media sites that if we label their information misinformation, the social media sites will deplatform it, or they will throttle it, or they will shadow-ban it, which destroys the economic model of those alternative, competitive sources of information. So that’s true.

(01:27:58)
But the point you make is an important point. The journalists themselves, who probably didn’t know about the TNI agreement, certainly I’m sure they didn’t, believe that they’re doing the right thing by suppressing information that may challenge government proclamations on COVID. But there’s a danger to that. And the danger is that once you appoint yourself an arbiter of what’s true and what’s not true, then there’s really no end to the power that you have now assumed for yourself, because now your job is no longer to inform the public. Your job now is to manipulate the public. And if you end up manipulating the public in collusion with powerful entities, then you become the instrument of authoritarian rule rather than the opponent of it. And it becomes the inverse of journalism in a democracy.

2024 elections

Lex Fridman
(01:29:05)
You’re running for president as a Democrat, what to you are the strongest values that represent the left-wing politics of this country?
Robert F. Kennedy Jr
(01:29:18)
I would say protection of the environment and the commons: the air, the water, wildlife, fisheries, public lands, the landscapes, our purple mountain majesties, those assets that cannot be reduced to private property ownership; the protection of the most vulnerable people in our society, which would include children and minorities; the restoration of the middle class, and protection of labor, dignity, and decent pay for labor; bodily autonomy, a woman’s right to-
Robert F. Kennedy Jr
(01:30:03)
… bodily autonomy, a woman’s right to choose, or an individual’s right to refuse unwanted medical procedures. Peace. The Democrats have always been anti-war. The refusal to use fear as a governing tool. FDR said, “The only thing we have to fear is fear itself,” because he recognized that tyrants and dictators could use fear to disable critical thinking and overwhelm the desire for personal liberty. The freedom of government from untoward influence by corrupt corporate power. The end of this corrupt merger of state and corporate power that is now, I think, dominating our democracy. It’s what Eisenhower warned about when he warned against the emergence of the military-industrial complex.

(01:31:07)
And then I prefer to talk about the positive vision of what we should be doing in our country and globally, which is this: I see that the corporations are commoditizing us, are poisoning our children, are strip-mining the wealth from our middle class and treating America as if it were a business in liquidation, converting assets to cash as quickly as possible and creating or exacerbating this huge disparity in wealth in our country, which is eliminating the middle class and creating a Latin American-style feudal model. There are these huge aggregations of wealth above and widespread poverty below, and that’s a configuration that is too unstable to support democracy sustainably. And we’re supposed to be modeling democracy, but we’re losing it.

(01:32:11)
And I think we ought to have a foreign policy that restores our moral authority around the world, restores America as the embodiment of moral authority, which it was when my uncle was president, and as a purveyor of peace rather than a war-like nation. My uncle said he didn’t want people in Africa and Latin America and Asia, when they think of America, to picture a man with a gun and a bayonet. He wanted them to think of a Peace Corps volunteer, and he refused to send combat soldiers abroad. He never sent a single soldier to his death abroad in combat. He resisted in Berlin in ’62. He resisted in Laos in ’61. He resisted in Vietnam. In Vietnam, they wanted him to put in 250,000 troops. He only put in 16,000 advisors, which was fewer troops than he sent to Ole Miss.

(01:33:22)
And to get James Meredith into the university at Ole Miss, in Oxford, Mississippi, one black man, he sent 16,000. And a month before he died, he ordered them all home. I think it was October 2nd, 1963, when he heard that a Green Beret had died. And he asked his aide for a list of combat fatalities. And the aide came back, and 75 men had died in Vietnam at that point. And he said, “That’s too many. We’re going to have no more.” And he signed a national security order, 263, and ordered all of those men, all Americans, home from Vietnam by 1965, with the first thousand coming home by December ’63.

(01:34:13)
And then in November, of course, just before that evacuation began, he was killed. And a week later, President Johnson rescinded that order. And then a year after that, with the Tonkin Gulf Resolution, we sent 250,000 troops, which is what they had wanted my uncle to do and which he refused. And it became an American war. And then Nixon topped it off at 560,000. 56,000 Americans never came home, including my cousin George Skakel, who died in the Tet Offensive. And we killed a million Vietnamese, and we got nothing for it.
Lex Fridman
(01:34:51)
So America should be the symbol of peace?
Robert F. Kennedy Jr
(01:34:57)
My uncle really focused on putting America on the side of the poor, instead of our tradition of fortifying oligarchies that were anti-communist. That was our major criterion: if you said you were against the communists, our aid was going to the rich people in those countries and to the military juntas. Our weapons were going to the juntas to fight against the poor. And my uncle said, “No, America should be on the side of the poor.” And so he launched the Alliance for Progress and USAID, which were intended to bring aid to the poorest people and build middle classes.

(01:35:42)
In fact, of his two favorite trips while he was president, his most favorite was to Ireland, this incredible, emotional homecoming for all of the people of Ireland. But his second favorite trip was when he went to Latin America, to Colombia, his favorite country there. And I think there were 2 million people who came into Bogota to see him, this vast crowd, and they were just delirious, cheering for him. And the president of Colombia, Lleras Camargo, said to him, “Do you know why they love you?”

(01:36:22)
And my uncle said, “Why?”

(01:36:24)
And he said, “Because they think you’ve put America on the side of the poor against the oligarchs.” Today, after his death, there are more avenues and boulevards and hospitals and schools and statues and parks commemorating John Kennedy in Africa and Latin America than any other American president, and probably more than all the other presidents combined. And it’s because he put America on the side of the poor. And that’s what we ought to be doing.

(01:37:01)
We ought to be projecting economic power abroad. The Chinese have essentially stolen his playbook, and we’ve spent $8 trillion on the Iraq War and its aftermath: the wars in Syria, Yemen, Libya, Afghanistan, Pakistan. And what did we get for that? We got nothing for that money. $8 trillion. We killed more Iraqis than Saddam Hussein did. Iraq today is much worse off than it was when Saddam was there. It’s an incoherent, violent war between Shia and Sunni death squads. We pushed Iraq into the embrace of Iran, and it has now become essentially a proxy for Iran, which is exactly the outcome that we were trying to prevent for the past 20 or 30 years.

(01:37:53)
We created ISIS, and we sent 2 million refugees into Europe, destabilizing all of the nations of Europe for generations. We’re now seeing these riots in France, and that’s a direct result of the Syrian war that we created and of our creation of ISIS. Brexit is another result of that. So for $8 trillion, we wrecked the world. And during that same period in which we spent $8.1 trillion bombing bridges, ports, schools, and hospitals, the Chinese spent $8.1 trillion building schools, ports, hospitals, bridges, and universities.

(01:38:42)
And now the Chinese are out-competing us everywhere in the world. Everybody wants to deal with the Chinese, because they come in, they build nice things for you, there are no strings attached, and they’re pleasant to deal with. And as a result of that, Brazil is switching to the Chinese currency. Argentina is switching. Saudi Arabia, our greatest partner there, into which we put trillions of dollars protecting our oil pipelines, is now saying, “We don’t care what the United States thinks.” That’s what Mohammed bin Salman said.

(01:39:24)
He dropped oil production in Saudi Arabia in the middle of a US inflation spiral. They’ve never done that to us before, to aggravate the inflation spiral. And then they signed a unilateral peace deal with Iran, which has been the enemy we’ve been telling them to be a bulwark against for 20 years. And two weeks after that, he said, “We don’t care what the United States thinks anymore.” So that’s what we got for spending all those trillions of dollars there. We got short-term friends. And we have not made ourselves safer. We’ve put Americans in more jeopardy all over the world. You have to wait in lines to get through the airport. The security state is now costing us $1.3 trillion, and America is less safe and poorer than it’s ever been. So we should be doing what President Kennedy said we ought to do, which is the policy that China has now adopted.

Jordan Peterson

Lex Fridman
(01:40:37)
So that’s a really eloquent and clear and powerful description of the way you see the US should be doing geopolitics, and the way you see the US should be taking care of the poor in this country. Let me ask you a question from Jordan Peterson that he asked when I told him that I’m speaking with you: “Given everything you’ve said, when does the left go too far?” I suppose he’s referring to cultural issues, identity politics.
Robert F. Kennedy Jr
(01:41:10)
Well, Jordan was trying to get me to badmouth the left the whole time I was on with him. I really enjoyed my talk with him, but he seemed to have that agenda, where he wanted me to say bad things about the left, and that’s not what my campaign is about. I want to do the opposite. I’m not going to badmouth the left. I was on shows this week with David Remnick from the New Yorker, and he tried to get me to badmouth Donald Trump and Alex Jones and a lot of other people, baiting me to do it. And of course there are a lot of bad things I could say about all those people, but I’m trying to find values that hold us together and that we can share in common, rather than to focus constantly on these disputes and these issues that drive us apart.

(01:42:07)
So me sitting here badmouthing the left or badmouthing the right is not going to advance the ball. I really want to figure out what these groups hold in common, so that we can all have a shared vision of what we want this country to look like.

Anthony Fauci

Lex Fridman
(01:42:25)
Well, that’s music to my ears. But in that spirit, let me ask you a difficult question then. You wrote a book harshly criticizing Anthony Fauci. Let me ask you to steelman the case for the people who support him. What is the biggest positive thing you think Anthony Fauci did for the world? What is good that he has done for the world, especially during this pandemic?
Robert F. Kennedy Jr
(01:42:48)
I don’t want to sit here and speak unfairly by saying the guy didn’t do anything, but I can’t think of anything. If you tell me something that you think he did, maybe there was a drug that got licensed while he was at NIH that benefited people; that’s certainly possible. He was there for 50 years. But in terms of his principal programs, the AIDS programs and his COVID programs, I think that the harm that he did vastly outweighed the benefits.
Lex Fridman
(01:43:29)
Do you think he believes he’s doing good for the world?
Robert F. Kennedy Jr
(01:43:31)
I don’t know what he believes. In fact, in that book, which is, I think, 250,000 words, I never try to look inside his head. I deal with facts. I deal with science, and every factual assertion in that book is cited and sourced to government databases or peer-reviewed publications. And I try not to speculate about things that I don’t know about or can’t prove. So I cannot tell you what his motivations were. He’s done a lot of things that I think are really very, very bad for humanity, and very deceptive. But we all have this capacity for self-deception. As I said at the beginning of this podcast, we judge ourselves on our intentions rather than our actions. And we all have an almost infinite capacity to convince ourselves that what we’re doing is right. Not everybody lives an examined life, examining their motivations and the way that the world might experience their professions of goodness.
Lex Fridman
(01:44:45)
Let me ask about the difficulty of the job he had. Do you think it’s possible to do that kind of job well, or is there also a fundamental flaw in the job itself, of being the centralized figure who’s supposed to set scientific policy?
Robert F. Kennedy Jr
(01:44:58)
No. No. I think he was a genuinely bad human being. And there were many, many good people in that department over the years. Bernice Eddy is a really good example. John Anthony Morris. Many people whose careers he destroyed because they were trying to tell the truth. One after the other, the greatest scientists in the history of NIH were run out of that agency. But people listening to this, in hearing me say that, will probably think that I’m bitter or that I’m doctrinaire about him. You should really go and read my book. It’s hard to summarize, but I try to be really methodical, to not call names, to just say what happened.

Big Pharma

Lex Fridman
(01:45:57)
The bigger picture of this is you’re an outspoken critic of pharmaceutical companies, big pharma. What is the biggest problem with big pharma and how can it be fixed?
Robert F. Kennedy Jr
(01:46:07)
Well, the problem could be fixed with regulation. But the pharmaceutical industry is… I don’t want to say, because this is going to seem extreme, that it’s a criminal enterprise, but if you look at the history, that is an applicable characterization. For example, the four biggest vaccine makers, Sanofi, Merck, Pfizer, and Glaxo, the four companies that make all of the 72 vaccines that are now effectively mandated for American children: collectively, those companies have paid $35 billion in criminal penalties and damages in the last decade, and I think since 2000, about $79 billion. So these are the most corrupt companies in the world.

(01:47:08)
And the problem is that they’re serial felons. They do this again and again and again. Merck did Vioxx. With Vioxx, they killed people by falsifying science. And they did it. They lied to the public. They said, “This is a headache medicine and an arthritis painkiller.” But they didn’t tell people that it also gave you heart attacks.

(01:47:37)
And they knew. We found, when we sued them, the memos from their bean counters saying, “We’re going to kill this many people, but we’re still going to make money.” So they make those calculations, and those calculations are made very, very regularly. And then, when they get caught, they pay a penalty. I think they paid about $7 billion for Vioxx. But then, that same year they paid that penalty, they went right back into the same thing again with Gardasil and with a whole lot of other drugs. So the way the system is set up, the way that it’s sold to doctors, the way that nobody ever goes to jail, there’s really no penalty, so it all becomes part of the cost of doing business.

(01:48:32)
And you can see it in other businesses: if there’s no penalty, if there’s no real… Look, these are the companies that gave us the opioid epidemic. They knew what was going to happen. And you can go and see, there’s a documentary, I forget the name of it, but it shows exactly what happened. They corrupted FDA. They knew that oxycodone was addictive. They got FDA to tell doctors that it wasn’t addictive. They pressured FDA to lie, and they got their way. And they got a whole generation addicted to oxycodone. And then, when they got caught and we made it harder to get oxycodone, all those addicted kids went to fentanyl and are dying. This year it killed 106,000. That’s twice as many people as were killed during the 20-year Vietnam War, but in one year, twice as many American kids. And they knew it was going to happen, and they did it to make money. So I don’t know what you call that other than a criminal enterprise.
Lex Fridman
(01:49:47)
Well, is it possible, within a capitalist system, to produce medication, to produce drugs at scale in a way that is not corrupt?
Robert F. Kennedy Jr
(01:49:57)
Of course it is.
Lex Fridman
(01:49:58)
How?
Robert F. Kennedy Jr
(01:50:00)
Through a solid regulatory regimen, where drugs are actually tested. The problem is not the capitalist system. The capitalist system I have great admiration and love for; it’s the greatest economic engine ever devised. But it has to be harnessed to a social purpose. Otherwise, it leads us down a trail of oligarchy, environmental destruction, and commoditizing, poisoning, and killing human beings. That’s what it will do. In the end, you need a regulatory structure that is not corrupted by financial entanglements with the industry. And we’ve set this up: the way the system is set up today has created regulatory capture on steroids.

(01:51:06)
So almost 50% of FDA’s budget comes from pharmaceutical companies. For the people who work at FDA, half their salaries are coming from pharma. So they know who their bosses are. And that means getting those drugs done, getting them out the door and approved as quickly as possible. It’s called fast-track approval. About 45% of FDA’s budget actually goes to fast-track approval.
Lex Fridman
(01:51:38)
Do you think money can buy integrity?
Robert F. Kennedy Jr
(01:51:40)
Oh yeah, of course it can. That’s not something that is controversial.
Lex Fridman
(01:51:48)
It’s slightly controversial to me. I would like to think that scientists who work at the FDA-
Robert F. Kennedy Jr
(01:51:53)
Well, it may not be able to buy your integrity. I’m talking population-wide; I’m not talking about the individual.
Lex Fridman
(01:51:58)
But I’d like to believe that, in general, a career as a scientist is not a very high-paying job. I’d like to believe that people who go into science, who work at FDA, who work at NIH, are doing it for a reason that’s not even correlated with money, really.
Robert F. Kennedy Jr
(01:52:18)
Yeah. And I think probably that’s why they go in there. But scientists are corruptible. And the way that I can tell you that is that I’ve brought over 500 lawsuits, and almost all of them involve scientific controversies, and there are scientists on both sides in every one. When we sued Monsanto, on the Monsanto side there was a Yale scientist, a Stanford scientist, and a Harvard scientist. And on our side there was a Yale, a Stanford, and a Harvard scientist. And they were saying exactly the opposite things. In fact, there’s a word for the kind of scientists who take money for their opinion, and the word is biostitutes. They are very, very common, and I’ve been dealing with them my whole career.

(01:53:05)
I think it was Upton Sinclair who said that it’s very difficult to persuade a man of a fact if the existence of that fact will diminish his salary. And I think that’s true for all of us: we find ways of reconciling ourselves to truths and worldviews that actually benefit our salaries. Now, NIH probably has the worst system. NIH used to be the premier, gold-standard scientific agency in the world; everybody looked at NIH as that. Today, it’s just an incubator for pharmaceutical drugs. And that is the gravity of economic self-interest.

(01:53:58)
Because NIH itself collects royalties: it has royalty rights on the patents of all the drugs that it works on. So with the Moderna vaccine, which it promoted incessantly and aggressively, NIH owns 50% of that vaccine and is making billions and billions of dollars on it. And there are at least four scientists that we know of, and probably at least six, at NIH who themselves have royalty rights on those patents. So if you are a scientist who works at NIH and you work on a new drug, you then get royalty rights, and you’re entitled to royalties of $150,000 a year from that, forever. Your children, your children’s children: as long as that product’s on the market, you can collect royalties.

(01:54:46)
The Moderna vaccine is paying the top people at NIH, some of the top regulators. It’s paying for their boats, it’s paying for their mortgages, it’s paying for their children’s education. And you have to expect that, in those kinds of situations, the regulatory function would be subsumed beneath the mercantile ambitions of the agency itself and of the individuals who stand to profit enormously from getting a drug to market. Those guys are paid by us, the taxpayers, to find problems with those drugs before they get to market. But if you know that drug is going to pay for your mortgage, you may overlook a little problem, or even a very big one. And that’s the problem.
Lex Fridman
(01:55:38)
You’ve talked about how the media slanders you by calling you an anti-vaxxer, and you’ve said that you’re not anti-vaccine, you’re pro safe vaccine. Difficult question: can you name any vaccines that you think are good?
Robert F. Kennedy Jr
(01:55:55)
I think some of the live virus vaccines are probably averting more problems than they’re causing. There’s no vaccine that is safe and effective. In fact-
Lex Fridman
(01:56:09)
Those are big words.
Robert F. Kennedy Jr
(01:56:09)
… Those are big words.
Lex Fridman
(01:56:10)
What about the polio? Let’s start with the-
Robert F. Kennedy Jr
(01:56:11)
Well, here’s the problem. Here’s the problem. Yeah, here’s the problem. The polio vaccine contained a virus called simian virus 40, SV40. It’s one of the most carcinogenic materials known to man. In fact, it’s used now by scientists around the world to induce tumors in rats and guinea pigs in labs. But it was in that vaccine, and 98 million people got that vaccine. My generation got it. And now you’ve had this explosion of soft-tissue cancers in our generation that killed many, many, many more people than polio ever did. So if you say to me, “The polio vaccine, was it effective against polio?”

(01:56:55)
I’m going to say, “Yes.”

(01:56:57)
And if you say to me, “Did it cause more deaths than it averted?”

(01:57:02)
I would say, “I don’t know, because we don’t have the data on that.”
Lex Fridman
(01:57:06)
But let’s talk about that. We have to narrow in on this: is it effective against the thing it’s supposed to fight?
Robert F. Kennedy Jr
(01:57:12)
Oh, well, a lot of them are. Let me give you an example. The most popular vaccine in the world is the DTP vaccine: diphtheria, tetanus, and pertussis. It was introduced in this country around 1980. That vaccine caused so many injuries that Wyeth, which was the manufacturer, said to the Reagan administration, “We are now paying $20 in downstream liabilities for every dollar that we’re making in profits, and we are getting out of the business unless you give us permanent immunity from liability.”

(01:57:45)
And by the way, Reagan said at that time, “Why don’t you just make the vaccine safe?” And why couldn’t they? Because vaccines are inherently unsafe.

(01:57:58)
They said, “Unavoidably unsafe, you cannot make them safe.”

(01:58:02)
And so, when Reagan wrote the bill and passed it, the bill said in its preamble, “Because vaccines are unavoidably unsafe.” And the Bruesewitz case, the Supreme Court case that upheld that bill, uses that same language: vaccines cannot be made safe; they’re unavoidably unsafe. So this is what the law says.

(01:58:21)
Now, I just want to finish this story, because it illustrates your question very well. The DTP vaccine was discontinued in this country, and it was discontinued in Europe, because so many kids were being injured by it. However, the WHO and Bill Gates give it to 161 million African children every year. And Bill Gates went to the Danish government and asked them to support this program, saying, “We’ve saved 30 million kids from dying from diphtheria, tetanus, and pertussis.”

(01:58:59)
The Danish government said, “Can you show us the data?” And he couldn’t. So the Danish government paid for a big study in West Africa, with Novo Nordisk, which is a Scandinavian vaccine company. They retained the best vaccine scientists in the world, these deities of the African vaccine programs, Peter Aaby, Sigrid Morganson, and a bunch of others. And they looked at 30 years of data for the DTP vaccine. And they came back, and they were shocked by what they found.

(01:59:36)
They found that the vaccine was preventing kids from getting diphtheria, tetanus, and pertussis, but the girls who got that vaccine were 10 times more likely to die over the next six months than children who didn’t. Why is that? They weren’t dying from anything anybody had ever associated with the vaccine. They were dying of anemia, bilharzia, malaria, sepsis, and mainly pulmonary and respiratory disease: pneumonia.
Robert F. Kennedy Jr
(02:00:02)
Mainly pulmonary and respiratory disease, pneumonia. And it turns out that this is what the researchers, who were all pro-vaccine, by the way, found. They said that this vaccine was killing more children than diphtheria, tetanus, and pertussis had killed prior to the introduction of the vaccine, and for 30 years nobody ever noticed it. The vaccine was providing protection against those target illnesses, but it had ruined the children’s immune systems, and they could not defend themselves against random infections that were harmless to most children.
Lex Fridman
(02:00:36)
But isn’t it nearly impossible to prove that link?
Robert F. Kennedy Jr
(02:00:39)
You can’t prove the link. For any particular injury, illness or death, you can’t prove the link. But you can show statistically that if you get that vaccine, you’re more likely to die over the next six months than if you don’t. And those studies unfortunately are not done for any other vaccines. So for every other medicine, in order to get approval from the FDA, you have to do a placebo controlled trial prior to licensure, where you look at health outcomes among an exposed group, a group that gets it, and compare those to a similarly situated group that gets a placebo. The only medical intervention that does not undergo placebo controlled trials prior to licensure is vaccines. Not one of the 72 vaccines that are now mandated for our children has ever undergone a placebo controlled trial prior to licensure.
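The kind of comparison described above, elevated all-cause mortality in an exposed group versus a control group, is usually summarized as a relative risk. Here is a minimal sketch; the function is generic, and all the counts are invented for illustration, not taken from any study mentioned here.

```python
# Sketch of an exposed-vs-control mortality comparison.
# All counts are hypothetical, for illustration only.

def relative_risk(deaths_exposed, n_exposed, deaths_control, n_control):
    """Ratio of the death rate in the exposed group to the control group."""
    risk_exposed = deaths_exposed / n_exposed
    risk_control = deaths_control / n_control
    return risk_exposed / risk_control

# Made-up example: 50 deaths among 10,000 exposed children versus
# 5 deaths among 10,000 controls over the same six-month window.
rr = relative_risk(50, 10_000, 5, 10_000)
print(rr)  # 10.0: the exposed group died at ten times the control rate
```

A real trial would also attach a confidence interval to that ratio; a relative risk by itself says nothing about statistical significance.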
Lex Fridman
(02:01:38)
So I should say on that point, I’ve heard from a bunch of folks that disagree with you.
Robert F. Kennedy Jr
(02:01:44)
Okay.
Lex Fridman
(02:01:44)
Including on polio. I mean, the testing is a really important point. Before licensure, placebo controlled randomized trials: polio received just that, against a saline placebo control. So I’m confused why you say that they don’t go through that process. It seems like a lot of them do.
Robert F. Kennedy Jr
(02:02:10)
Here’s the thing: I was saying that for many years, because we couldn’t find any. And then in 2016, in March, President Trump ordered Dr. Fauci to meet with me. Dr. Fauci and Francis Collins. And I said to them during that meeting, “You have been saying that I’m not telling the truth when I said not one of these has undergone a prior, pre-licensure placebo control.” And the polio may have had one post-licensing; most of them haven’t. The polio may have, I don’t know. But I said, “Our question was, prior to licensure, do you ever test these? And for safety?” And by the way, I think the polio vaccine did undergo a saline placebo trial prior to licensure, but not for safety, only for efficacy. So I’m talking about safety trials now. Fauci told me, he had a whole tray of files there. He said, “I can’t find one now, but I’ll send you one.”

(02:03:26)
I said, “Just for any vaccine, send me one. Any of the 72 vaccines.” He never did. So we sued HHS, and after a year of stonewalling us, HHS came back and they gave us a letter saying we have no pre-licensing safety trial for any of the 72 vaccines. And that letter from HHS, which settled our lawsuit against them, because we had a FOIA lawsuit against them, is posted on CHD’s website. So anybody can go look at it. So if HHS had any study, I assume they would’ve given it to us, and they can’t find one.
Lex Fridman
(02:04:08)
Well, let me zoom out, because a lot of the details matter here: pre-licensure, what does placebo controlled mean? So this probably requires a rigorous analysis. And actually, at this point, it would be nice for me just to give a shout-out to other people much smarter than me that people should follow along with Robert F. Kennedy Jr, use their mind, learn and think. So one really awesome creator I really recommend is Dr. Dan Wilson. He hosts the Debunk the Funk podcast. Vincent Racaniello, who hosts This Week in Virology. Brilliant guy, I’ve had him on the podcast. Somebody you’ve been battling with is Paul Offit: interesting Twitter, interesting books. People should understand and read your books as well. And Eric Topol has a good Twitter and good books. And even Peter Hotez, I’ll ask you about him.
Robert F. Kennedy Jr
(02:05:03)
And people should, because Paul Offit published a Substack recently debunking, I think, my discussion with Joe Rogan. And we have published a debunking of his debunking. So if you read his stuff, you should read-
Lex Fridman
(02:05:29)
Read both.
Robert F. Kennedy Jr
(02:05:30)
Yes, you should read… And I would love to debate any of these guys.

Peter Hotez

Lex Fridman
(02:05:37)
So Joe Rogan proposed just such a debate, and it was quite fascinating to see how much attention and how much funding it garnered, the debate between you and Peter Hotez. Why do you think Peter rejected the offer?
Robert F. Kennedy Jr
(02:05:51)
I think, again, I’m not going to look into his head, but what I will say is if you’re a scientist and you’re making public recommendations based upon what you say is evidence-based science, you ought to be able to defend that. You ought to be able to defend it in a public forum and you ought to be able to defend it against all comers. So if you’re a scientist, science is rooted in logic and reason. And if you can’t use logic and reason to defend your position, and by the way, I know almost all of the studies, I’ve written books on them and we’ve made a big effort to assemble all the studies on both sides. And so, I’m prepared to talk about those studies and I’m prepared to submit them in advance and for each of the points. And by the way, I’ve done that with Peter Hotez, actually because I had this kind of informal debate several years ago with him, with a referee at that time.

(02:07:02)
And we were debating not only by phone but by email, and on those emails, every point that he would make, I would cite science, and he could never come back with science. He could never come back with publications. He would give publications that had nothing to do with, for example, thimerosal and vaccines, mercury-based vaccines. He sent me one time 16 studies to rebut something I’d said about thimerosal. And not one of those studies was relevant; they were all about the MMR vaccine, which doesn’t contain thimerosal. So it wasn’t like a real debate where you’re using reason and isolating points and having a rational discourse. I don’t blame him for not debating me because I don’t think he has the science.
Lex Fridman
(02:07:53)
Are there aspects of all the work you’ve done on vaccines, all the advocacy you’ve done, that you found out that you were not correct on, that you were wrong on, that you’ve changed your mind on?
Robert F. Kennedy Jr
(02:08:09)
Yeah, there are many times over time that I found that I’ve made mistakes, and we correct those mistakes. I run a big organization and I do a lot of tweets. I’m very careful. For example, my Instagram was taken down for misinformation, but there was no misinformation on my Instagram. Everything that I cited on Instagram was cited or sourced to a government database or to peer-reviewed science. But for example, the Defender, which was our organization’s newsletter, we summarized scientific reports all the time. That’s one of the things, the services that we provide. So we watch PubMed and we watch the peer-reviewed publications, and we summarize them when they come out. We have made mistakes. When we make a mistake, we are rigorous about acknowledging it, apologizing for it, and changing it. That’s what we do. I think we have one of the most robust fact checking operations anywhere in journalism today.

(02:09:09)
We actually do real science. And listen, I’ve put up on my Twitter account where there are numerous times that I’ve made mistakes on Twitter, and I apologize for it. And people say to me, “Oh, that’s weird. I’ve never seen anybody apologize on Twitter.” But I think it’s really important. Of course, human beings make mistakes. My book is 230 or 240, 250,000 words. There’s going to be a mistake in there. But you know what I say at the beginning of the book: “If you see a mistake in here, please notify me.” I give a way that people can notify me. And if somebody points out a mistake, I’m going to change it. I’m not going to dig my feet in and say, “I’m not going to acknowledge this.”
Lex Fridman
(02:09:57)
So some of the things we’ve been talking about, you’ve been an outspoken contrarian on some very controversial topics. This has garnered some fame and recognition in part for being attacked and standing strong against those attacks. If I may say, for being a martyr, do you worry about this drug of martyrdom that might cloud your judgment?
Robert F. Kennedy Jr
(02:10:22)
First of all, I don’t consider myself a martyr and I’ve never considered myself a victim. I make choices about my life and I’m content with those choices and peaceful with them. I’m not trying to be a martyr or a hero or anything else. I’m doing what I think is right because I want to be peaceful inside of myself, but the only guard I have is fact-based reality. If you show me a scientific study that shows that I’m wrong, for example, if you come back and say, “Look, Bobby, here’s a safety study on polio that was done pre-licensure and used a real saline solution.” I’m going to put that on my Twitter and I’m going to say, “I was wrong, there is one out there.” But that’s all I can do.

Exercise and diet

Lex Fridman
(02:11:17)
All right. I have to ask, you are in great shape. Can you go through your diet and exercise routine?
Robert F. Kennedy Jr
(02:11:28)
I do intermittent fasting. So I start my first meal at around noon, and then I try to stop eating at six or seven. And then I hike every day.
Lex Fridman
(02:11:46)
Morning, evening?
Robert F. Kennedy Jr
(02:11:47)
In the morning. I go to a meeting first thing in the morning; by 12, I’m eating. And then I hike uphill for a mile and a half up and a mile and a half down with my dogs, and I do my meditations. And then I go to the gym, and I go to the gym for 35 minutes. I do a short time, and I’ve been exercising for 50 years. And what I’ve found is it’s sustainable if I do just the short periods, and I do four different routines at the gym. And I never relax at the gym; I go in there and I have a very intense exercise. I lift. I mean, I could tell you what my routine is, but I do back just one day, then legs, and then a miscellaneous day. And I do 12.

(02:12:36)
My first set of everything, I try to reach failure at 12 reps. And then my fourth set of everything is a strip set. I take a lot of vitamins. I can’t even list them for you here because I can’t remember them all. But I take a ton of vitamins and nutrients. I’m on an anti-aging protocol from my doctor that includes testosterone replacement. But I don’t take any anabolic steroids or anything like that. And the TRT I use is bioidentical to what my body produced.
Lex Fridman
(02:13:25)
What are your thoughts on hormone therapy in general?
Robert F. Kennedy Jr
(02:13:29)
I talk to a lot of doctors about that stuff because I’m interested in health and I’ve heard really good things about it, but I’m definitely not an expert on it.

God

Lex Fridman
(02:13:42)
About God. You wrote, “God talks to human beings through many vectors, wise people, organized religion, the great books of religions, through art, music and poetry. But nowhere with such detail and grace and joy as through creation. When we destroy nature, we diminish our capacity to sense the divine.” What is your relationship and what is your understanding of God? Who is God?
Robert F. Kennedy Jr
(02:14:09)
Well, God is incomprehensible. I mean, I guess most philosophers would say we’re inside the mind of God. And so, it would be impossible for us to understand what God’s form actually is. But for me, let’s say this: I was raised in a very deeply religious setting. We went to church in the summer, oftentimes twice a day, morning mass. And we definitely went every Sunday. And we prayed in the morning, we prayed before and after every meal, we prayed at night. We said a rosary, sometimes three rosaries a night. And my father read us the Bible. Whenever he was at home, we’d all get in the bed and he’d read us the Bible stories. And I went to Catholic schools, I went to Jesuit schools, I went to the nuns, and I went to a Quaker school at one point. I became a drug addict when I was about 15 years old, about a year after my dad died. And I was addicted to drugs for 14 years.

(02:15:32)
During that time, when you’re an addict, you’re living against conscience. And when you’re living against… I was always trying to get off of drugs, never able to. But I never felt good about what I was doing. And when you’re living against conscience, you kind of push God to the periphery of your life. God, I’ll call Him, recedes and gets smaller. And then when I got sober… I knew that I had a couple of experiences. One is that one of my brothers, who died of this disease of addiction, had a good friend who used to take drugs with us, and he became a Moonie. So he became a follower of Reverend Sun Myung Moon. And at that point, he had the same kind of compulsion that I had, and yet it was completely removed from him.

(02:16:41)
And he used to come and hang out with us, but he would not want to take drugs. Even if I was taking them right in front of him, he was immune to it. He’d become impervious to that impulse. And when I first got sober, I knew that I did not want to be the kind of person who was waking up every day white-knuckling sobriety and just trying to resist through willpower. And by the way, I had iron willpower as a kid. I gave up candy for Lent when I was 12, and I didn’t eat it again until I was in college. I gave up desserts the next year for Lent. And I didn’t ever eat another dessert till I was in college, and I was trying to bulk up for rugby and for sports. So I felt like I could do anything with my willpower. But somehow this particular thing, the addiction, was completely impervious to it. And it was cunning, baffling, incomprehensible. I could not understand why I couldn’t just say no and then never do it again like I did with everything else.

(02:17:57)
And so, I was living against conscience, and I thought about this guy. And reflecting my own prejudices at that time in my life, I said to myself, I didn’t want to be like a drug addict who was wanting a drug all the time and just not being able to take it. I wanted to completely realign myself so that I was somebody who got up every day and just didn’t want to take drugs, never thought of them, kissed the wife and children and went to work and never thought about drugs the whole day. And I knew that people throughout history had done that. I’d read the lives of the saints. I knew St. Augustine had led a very dissolute youth and had this spiritual realignment, transformation. I knew the same thing had happened to St. Paul at Damascus. The same thing had happened to St. Francis.

(02:18:55)
St. Francis also had a dissolute and fun-loving youth and had this deep spiritual realignment. And I knew that that happened to people throughout history. And I thought that’s what I needed, something like that. I had the example of this friend of mine, and I used to think about him, and I would think, and this again reflects the bias and probably the meanness of myself at that time, but I said, “I’d rather be dead than be a Moonie.” But I wished I somehow could distill that power that he got without becoming a religious nuisance. And at that time, I picked up a book by Carl Jung called Synchronicity. Jung was a psychiatrist, a contemporary of Freud’s. Freud was his mentor, and Freud wanted him to be his replacement. But Freud was an avowed atheist, and Jung was a deeply spiritual man.

(02:19:58)
He had these very intense and genuine spiritual experiences from when he was a little boy, from when he was three years old. His biography is fascinating because he remembers them in such detail. And he was interesting to me because he was a very faithful scientist, and I considered myself a science-based person from when I was little. And yet he had this spiritual dimension to him, which infused all of his thinking and really, I think, branded his form of recovery, of treatment. And he had this experience that he describes in this book. He ran one of the biggest sanitariums in Europe, in Zurich. And he was sitting up on the third floor of this building, and he’s talking to a patient who was describing her dream to him.

(02:21:01)
And the fulcrum of that dream was a scarab beetle, which is an insect that is very uncommon, if present at all, in Northern Europe, but it’s a common figure in the iconography of Egypt and the hieroglyphics on the walls of the pyramids, etc. And while he was talking to her, he heard this bing, bing, bing on the window behind him, and he didn’t want to turn around and take his attention off her. But finally, he does it in exasperation. He turns around, he throws up the window, and a scarab beetle flies in and lands in his hand, and he shows it to the woman. And he says, “Is this what you were thinking of? This is what you were dreaming about.” And he was struck by that experience, which was similar to other experiences he’d had like that. And that’s what synchronicity means: it’s an incident, not a coincidence.

(02:21:56)
And if you’re talking with somebody about somebody that you haven’t thought about in 20 years and that person calls on the phone, that’s synchronicity. And he believed it was a way that God intervened in our lives, that broke all the rules of nature that He had set up, the rules of physics, the rules of mathematics, to reach in and sort of tap us on the shoulder and say, “I’m here.” And so, he tried to reproduce that in a clinical setting, and he would put one guy in one room and another guy in another room and have them flip cards and guess what the other guy had flipped. And he believed that if he could beat the laws of chance, the laws of mathematics, that he would prove the existence of an unnatural law, a supernatural law. And that was the first step to proving the existence of a God.
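The card-guessing setup he describes reduces to asking whether an observed hit rate beats the laws of chance, which is a one-sided binomial test. A minimal sketch, with hypothetical numbers; the card count and hit totals below are invented for illustration:

```python
# One-sided binomial test: how likely is it to score at least `hits`
# correct guesses out of `trials` if only chance (p_chance) is at work?
from math import comb

def p_value_at_least(hits, trials, p_chance):
    """P(X >= hits) for X ~ Binomial(trials, p_chance)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# Suppose each flip offers 5 equally likely cards (chance = 0.2) and the
# guesser gets 30 of 100 right, versus the 20 expected by chance.
p = p_value_at_least(30, 100, 0.2)
print(p < 0.05)  # True: 30 hits would be unlikely under chance alone
```

A small p-value here would be evidence against pure chance, which is exactly the threshold Jung was trying, and failing, to cross.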

(02:22:48)
He never succeeds in doing it. But he says in the book, “Even though I can’t prove, using empirical and scientific tools, the existence of a God, I can show through anecdotal evidence, having seen thousands of patients come through this institution, that people who believe in God get better faster and that their recovery is more enduring than people who don’t.” And for me, hearing that was more impactful than if he had claimed that he had proved the existence of God, because I wouldn’t have believed that. But I was already at a mindset where I would’ve done anything I could to improve my chances of never having to take drugs again by even 1%. And if believing in God was going to help me, whether there’s a God up there or not, believing in one itself had the power to help me, I was going to do that.

(02:23:40)
So then the question is how do you start believing in something that you can’t see or smell or hear or touch or taste or acquire with your senses? And Jung provides the formula for that. And he says, “Act as if. Fake it till you make it.” And so, that’s what I started doing. I just started pretending there was a God watching me all the time, and that life was kind of a series of tests. And there was a bunch of moral decisions that I had to make every day. And these were all just little things that I did. But each one now for me had a moral dimension. Like when the alarm goes off, do I lie in bed for an extra 10 minutes with my indolent thoughts or do I jump right out of bed? Do I make my bed? Most important decision of the day.

(02:24:28)
Do I hang up the towels? When I go into the closet and pull out my blue jeans and a bunch of those wire hangers fall on the ground, do I shut the door and say, “I’m too important to do that. That’s somebody else’s job,” or not? And so, do I put the water in the ice tray before I put it in the freezer? Do I put the shopping cart back in the place that it’s supposed to go in the parking lot of the Safeway? If I make a whole bunch of those choices, I maintain myself in a posture of surrender, which keeps me open to my higher power, to my God. And when I do those things right… So much about addiction is about abuse of power. All of us have some power, whether it’s our good looks or whether it’s connections or education or family or whatever.

(02:25:33)
And there’s always a temptation to use those to fulfill self-will. And the challenge is how do you use those, always, to serve instead God’s will and the good of our community? And that to me is kind of the struggle. But when I do that, I feel God’s power coming through me, and I can do things. I’m much more effective as a human being. That gnawing anxiety that I lived with for so many years, it’s gone, and I can put down the oars and hoist the sail and the wind takes me, and I can see the evidence of it in my life. And the big temptation for me is that when all these good things start happening in my life and the cash and prizes start flowing in, how do I maintain that posture of surrender? How do I stay surrendered when my inclination is to say to God, “Thanks, God, I got it from here,” and drive the car off the cliff again?

(02:26:49)
And so, I had a spiritual awakening and my desire for drugs and alcohol was lifted miraculously. And to me, it was as much a miracle as if I’d been able to walk on water because I had tried everything earnestly, sincerely and honestly for a decade to try to stop and I could not do it under my own power. And then all of a sudden, it was lifted effortlessly. So I saw that early evidence of God in my life and of the power, and I see it now every day of my life.
Lex Fridman
(02:27:29)
So adding that moral dimension to all of your actions is how you were able to win that Camus battle against the absurd.
Robert F. Kennedy Jr
(02:27:38)
Exactly.
Lex Fridman
(02:27:38)
Sisyphus with the boulder.
Robert F. Kennedy Jr
(02:27:39)
It’s all the same thing. It’s the battle to just do the right thing.
Lex Fridman
(02:27:44)
Now, Sisyphus was somehow able to find happiness. Yeah. Well, Bobby, thank you for the stroll through some of the most important moments in recent human history, and for running for president. And thank you for talking today.
Robert F. Kennedy Jr
(02:27:59)
Thank you, Lex.
Lex Fridman
(02:28:01)
Thanks for listening to this conversation with Robert F. Kennedy Jr. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from John F. Kennedy. “Let us not seek the Republican answer or the Democratic answer, but the right answer. Let us not seek to fix the blame for the past. Instead, let us accept our own responsibility for the future.” Thank you for listening and hope to see you next time.

Transcript for George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God | Lex Fridman Podcast #387

This is a transcript of Lex Fridman Podcast #387 with George Hotz.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman
(00:00:00)
What possible ideas do you have for how human species ends?
George Hotz
(00:00:03)
Sure. I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it’s even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it’s probably hard to get all of humanity.
Lex Fridman
(00:00:27)
Yeah. The interesting thing about humanity is the diversity in it.
George Hotz
(00:00:30)
Oh, yeah.
Lex Fridman
(00:00:31)
Organisms in general. There’s a lot of weirdos out there, two of them are sitting here.
George Hotz
(00:00:36)
I mean, diversity in humanity is-
Lex Fridman
(00:00:38)
With due respect.
George Hotz
(00:00:40)
I wish I was more weird.
Lex Fridman
(00:00:44)
The following is a conversation with George Hotz, his third time on this podcast. He’s the founder of Comma.ai, which seeks to solve autonomous driving, and is the founder of a new company called tiny corp that created tinygrad, a neural network framework that is extremely simple, with the goal of making it run on any device, by any human, easily and efficiently. As you know, George also did a large number of fun and amazing things, from hacking the iPhone to recently joining Twitter for a bit as an “intern”, making the case for refactoring the Twitter code base.

(00:01:23)
In general he’s a fascinating engineer and human being, and one of my favorite people to talk to. This is a Lex Fridman podcast. To support it please check out our sponsors in the description. Now, dear friends, here’s George Hotz. You mentioned something in a stream about the philosophical nature of time. Let’s start with a wild question. Do you think time is an illusion?

Time is an illusion

George Hotz
(00:01:47)
You know, I sell phone calls to Comma for a thousand dollars, and some guy called me. It’s a thousand dollars; you can talk to me for half an hour. He’s like, “Yeah, okay. Time doesn’t exist and I really wanted to share this with you.” I’m like, “Oh, what do you mean time doesn’t exist?” I think time is a useful model, whether it exists or not. Right? Does quantum physics exist? Well, it doesn’t matter. It’s about whether it’s a useful model to describe reality. Is time maybe compressive?
Lex Fridman
(00:02:25)
Do you think there is an objective reality or is everything just useful models? Underneath it all is there an actual thing that we’re constructing models for?
George Hotz
(00:02:35)
I don’t know.
Lex Fridman
(00:02:39)
I was hoping you would know.
George Hotz
(00:02:40)
I don’t think it matters.
Lex Fridman
(00:02:42)
I mean, this connects to the models of constructive reality with machine learning, right?
George Hotz
(00:02:47)
Sure.
Lex Fridman
(00:02:49)
Is it just nice to have useful approximations of the world such that we can do something with it?
George Hotz
(00:02:55)
There are things that are real. [inaudible 00:02:57] complexity is real.
Lex Fridman
(00:02:59)
Yeah.
George Hotz
(00:02:59)
Yeah. The compressive-
Lex Fridman
(00:03:00)
Math.
George Hotz
(00:03:02)
Math is real. Yeah.
Lex Fridman
(00:03:03)
Should be a T-shirt.
George Hotz
(00:03:05)
I think hard things are actually hard. I don’t think P equals NP.
Lex Fridman
(00:03:09)
Ooh. Strong words.
George Hotz
(00:03:10)
Well, I think that’s the majority view. I do think factoring is in P.
Lex Fridman
(00:03:14)
I don’t think you’re the person that follows the majority in all walks of life.
George Hotz
(00:03:18)
For that one I do.
Lex Fridman
(00:03:19)
Yeah. In theoretical computer science, you’re one of the sheep. All right. To you time is a useful model.
George Hotz
(00:03:28)
Sure.
Lex Fridman
(00:03:29)
What were you talking about on the stream about time? Are you made of time?
George Hotz
(00:03:33)
If I remembered half the things I said on stream. Someday someone’s going to make a model of all of it and it’s going to come back to haunt me.
Lex Fridman
(00:03:40)
Someday soon?
George Hotz
(00:03:41)
Yeah, probably.
Lex Fridman
(00:03:42)
Would that be exciting to you or sad that there’s a George Hotz model?
George Hotz
(00:03:48)
I mean, the question is when the George Hotz model is better than George Hotz. Like, I am declining and the model is growing.
Lex Fridman
(00:03:54)
What is the metric by which you measure better or worse in that, if you are competing with yourself?
George Hotz
(00:04:00)
Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.
Lex Fridman
(00:04:06)
People close to you or strangers?
George Hotz
(00:04:09)
Either one. It will hurt more when it’s people close to me, but both will be overtaken by the George Hotz model.
Lex Fridman
(00:04:16)
It’d be quite painful. Loved ones, family members would rather have the model over for Thanksgiving than you, or significant others would rather sext with the large language model version of you.
George Hotz
(00:04:35)
Especially when it’s fine-tuned to their preferences.
Lex Fridman
(00:04:39)
Yeah. Well, that’s what we’re doing in a relationship. We’re just fine-tuning ourselves, but we’re inefficient with it because we’re selfish and greedy and so on. Language models can fine-tune more efficiently, more selflessly.
George Hotz
(00:04:51)
There’s a Star Trek: Voyager episode where Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck, and the lover falls asleep on her arm and he snores a little bit. Janeway edits the program to remove that. Then of course the realization is, wait, this person’s terrible. It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. I don’t think we’re going to realize that until it’s too late.
Lex Fridman
(00:05:24)
Well, I think a large language model could incorporate the flaws and the quirks and all that kind of stuff.
George Hotz
(00:05:30)
Just the perfect amount of quirks and flaws to make you charming without crossing the line.
Lex Fridman
(00:05:36)
Yeah, and that’s probably a good approximation of the percent of time the language model should be cranky or an asshole or jealous or all this kind of stuff.
George Hotz
(00:05:52)
Of course it can and it will. All that difficulty at that point is artificial. There’s no more real difficulty.
Lex Fridman
(00:05:59)
What’s the difference between real and artificial?
George Hotz
(00:06:01)
Artificial difficulty is difficulty that’s like constructed or could be turned off with a knob. Real difficulty is like you’re in the woods and you got to survive.
Lex Fridman
(00:06:11)
If something cannot be turned off with a knob it’s real?
George Hotz
(00:06:16)
Yeah, I think so. I mean, you can’t get out of this by smashing the knob with a hammer. I mean, maybe you can. In Into the Wild, Alexander Supertramp wants to explore something that’s never been explored before, but it’s the nineties. Everything’s been explored. So he’s like, “Well, I’m just not going to bring a map.”
Lex Fridman
(00:06:36)
Yeah.
George Hotz
(00:06:36)
I mean, no, you’re not exploring. You should have brought a map, dude. You died. There was a bridge a mile from where you were camping.
Lex Fridman
(00:06:44)
How does that connect to the metaphor of the knob?
George Hotz
(00:06:46)
By not bringing the map, you didn’t become an explorer. You just smashed the thing.
Lex Fridman
(00:06:53)
Yeah.
George Hotz
(00:06:53)
Yeah. The difficulty is still artificial.
Lex Fridman
(00:06:56)
You failed before you started. What if we just don’t have access to the knob?
George Hotz
(00:07:00)
Well, that maybe is even scarier. We already exist in a world of nature, and nature has been fine-tuned over billions of years. To have humans build something and then throw the knob away in some grand romantic gesture is horrifying.
Lex Fridman
(00:07:21)
Do you think of us humans as individuals that are born and die or are we just all part of one living organism that is earth, that is nature?
George Hotz
(00:07:33)
I don’t think there’s a clear line there. I think it’s all kind of just fuzzy. I don’t know. I mean, I don’t think I’m conscious. I don’t think I’m anything. I think I’m just a computer program.
Lex Fridman
(00:07:44)
It’s all computation, everything running in your head is just computation.
George Hotz
(00:07:49)
Everything running in the universe is computation, I think. I believe the extended [inaudible 00:07:53] thesis.
Lex Fridman
(00:07:56)
There seems to be an embodiment to your particular computation. There’s a consistency.
George Hotz
(00:08:00)
Well, yeah, but I mean, models have consistency too.
Lex Fridman
(00:08:04)
Yeah.
George Hotz
(00:08:05)
Models that have been RLHF’d will continually say like, well, how do I murder ethnic minorities? Oh, well, I can’t let you do that, Hal. There’s a consistency to that behavior.
Lex Fridman
(00:08:15)
It’s all RLHF. We RLHF each other. We provide human feedback and thereby fine-tune these little pockets of computation. It’s still unclear why that pocket of computation stays with you for years. You have this consistent set of physics, biology, whatever you call it: the neurons firing, the electrical signals, the mechanical signals. All of that seems to stay there. It contains information. It stores information, and that information permeates through time and stays with you. There’s, like, memory; there’s, like, stickiness.
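The feedback-as-fine-tuning idea Lex is riffing on is, in most real pipelines, the reward-model step of RLHF. A minimal sketch in plain Python of the pairwise (Bradley-Terry) loss reward models are typically trained with; the function name and numbers are illustrative, not from any particular library:

```python
import math

def reward_model_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise Bradley-Terry loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). It is small when the model
    already scores the human-preferred response above the rejected one."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Agreeing with the human labeler gives a small loss; disagreeing gives a
# large one, so gradient descent nudges scores toward human preferences.
agree = reward_model_loss(2.0, -1.0)     # preferred response scored higher
disagree = reward_model_loss(-1.0, 2.0)  # preferred response scored lower
```

The "consistency" George notes in RLHF’d models is exactly this: the same preference signal, applied over many comparisons, pushes the model toward one stable behavior.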
George Hotz
(00:09:01)
To be fair, a lot of the models we’re building today are very… Even RLHF is nowhere near as complex as the human loss function.
Lex Fridman
(00:09:08)
Reinforcement learning with human feedback.
George Hotz
(00:09:11)
When I talked about whether GPT-12 will be AGI, my answer is no. Of course not. I mean, cross-entropy loss is never going to get you there. You need probably RL in fancy environments in order to get something that would be considered AGI-like. To ask the question about why? I don’t know. It’s just some quirk of evolution. I don’t think there’s anything particularly special about where I ended up, where humans ended up.
Lex Fridman
(00:09:40)
Okay, we have human level intelligence. Would you call that AGI, whatever we have, GI?
George Hotz
(00:09:47)
Look, actually, I don’t really even like the word AGI, but general intelligence is defined to be whatever humans have.
Lex Fridman
(00:09:55)
Okay, so why can GPT-12 not get us to AGI? Can we just linger on that?
George Hotz
(00:10:02)
If your loss function is categorical cross-entropy, if your loss function is just trying to maximize compression… I have a SoundCloud, I rap, and I tried to get ChatGPT to help me write raps, and the raps that it wrote sounded like YouTube-comment raps. You can go on any rap beat online and you can see what people put in the comments. It’s the most mid-quality rap you can find.
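The objective George names, categorical cross-entropy over next tokens, fits in a few lines. A toy sketch with a made-up three-word vocabulary, not any real model’s probabilities:

```python
import math

def next_token_cross_entropy(predicted_probs: dict, target: str) -> float:
    """Categorical cross-entropy for next-token prediction: -log p(target).
    Minimizing it across a corpus is equivalent to maximizing compression
    of that corpus, which is why the objective rewards the statistically
    typical continuation rather than an inventive one."""
    return -math.log(predicted_probs[target])

probs = {"the": 0.5, "a": 0.3, "banana": 0.2}  # toy next-token distribution
loss_typical = next_token_cross_entropy(probs, "the")        # likely token, low loss
loss_surprising = next_token_cross_entropy(probs, "banana")  # rare token, high loss
```

Surprising (low-probability) continuations are penalized hardest, which is one way to read the "mid rap" complaint: the training objective literally prefers the middle of the distribution.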
Lex Fridman
(00:10:23)
Is mid good or bad?
George Hotz
(00:10:24)
Mid is bad.
Lex Fridman
(00:10:25)
Mid is bad.
George Hotz
(00:10:25)
It’s like mid.
Lex Fridman
(00:10:27)
Every time I talk to you, I learn new words. Mid.
George Hotz
(00:10:32)
Mid. Yeah.
Lex Fridman
(00:10:35)
I was like, is it like basic? Is that what mid means?
George Hotz
(00:10:37)
Kind of. It’s like middle of the curve, right?
Lex Fridman
(00:10:39)
Yeah.
George Hotz
(00:10:40)
There’s like that intelligence curve and you have the dumb guy, the smart guy, and then the mid guy. Actually being the mid guy is the worst. The smart guy is like I put all my money in Bitcoin. The mid guy is like, “You can’t put money in Bitcoin. It’s not real money.”
Lex Fridman
(00:10:55)
All of it is a genius meme. That’s another interesting one. Memes, the humor, the idea, the absurdity encapsulated in a single image and it just propagates virally between all of our brains. I didn’t get much sleep last night, so I sound like I’m high. I swear I’m not. Do you think we have ideas or ideas have us?

Memes

George Hotz
(00:11:24)
I think that we’re going to get super scary memes once the AIs actually are superhuman.
Lex Fridman
(00:11:30)
You think AI will generate memes?
George Hotz
(00:11:31)
Of course.
Lex Fridman
(00:11:32)
You think it’ll make humans laugh?
George Hotz
(00:11:35)
I think it’s worse than that. Infinite Jest, it’s introduced in the first 50 pages, is about a tape that, once you watch it once, you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, “Okay, here’s a hacksaw. Cut off your pinky and then I’ll let you watch the tape again.” You’ll do it. We’re actually going to build that, I think, but it’s not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at ant brains, maybe they can be stuck on a static tape, but we’re going to build that using generative models. We’re going to build the TikTok that you actually can’t look away from.
Lex Fridman
(00:12:16)
TikTok is already pretty close to there, but the generation is done by humans. The algorithm is just doing the recommendation. If the algorithm is also able to do the generation…
George Hotz
(00:12:25)
Well, it’s a question about how much intelligence is behind it. The content is being generated by, let’s say, one humanity worth of intelligence, and you can quantify a humanity in exaflops, [inaudible 00:12:40], but you can quantify it. Once that generation is being done by a hundred humanities, you’re done.
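The "quantify a humanity" point is back-of-the-envelope arithmetic. A sketch in which both constants are loose assumptions for illustration (published estimates of brain compute vary by several orders of magnitude):

```python
# Rough arithmetic behind "one humanity worth of intelligence".
# Both constants are assumptions, not measurements.
BRAIN_FLOPS = 1e16   # assumed effective FLOP/s of one human brain
POPULATION = 8e9     # roughly the current number of humans

one_humanity = BRAIN_FLOPS * POPULATION   # ~8e25 FLOP/s, i.e. ~80 million exaFLOPS
hundred_humanities = 100 * one_humanity   # the generation capacity George warns about
```

Whatever the true per-brain number is, the argument only needs the ratio: content generated by a hundred humanities of compute versus the one humanity consuming it.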
Lex Fridman
(00:12:48)
It’s actually scale that’s the problem, but also speed. Yeah. What if it’s manipulating the very limited human dopamine engine, so porn? Imagine just TikTok, but for porn.
George Hotz
(00:13:05)
Yeah.
Lex Fridman
(00:13:06)
It’s like a brave new world.
George Hotz
(00:13:08)
I don’t even know what it’ll look like. Again, you can’t imagine the behaviors of something smarter than you, but a superintelligence, an agent that just dominates your intelligence so much, will be able to completely manipulate you.
Lex Fridman
(00:13:24)
Is it possible that it won’t really manipulate? It’ll just move past us. It’ll just exist the way water exists or the air exists.
George Hotz
(00:13:33)
You see, and that’s the whole AI safety thing. It’s not the machine that’s going to do that. It’s other humans using the machine that are going to do that to you.
Lex Fridman
(00:13:44)
Because the machine is not interested in hurting humans. It’s just…
George Hotz
(00:13:47)
The machine is a machine, but the human gets the machine and there’s a lot of humans out there very interested in manipulating you.

Eliezer Yudkowsky

Lex Fridman
(00:13:55)
Well, let me bring up, Eliezer Yudkowsky who recently sat where you’re sitting. He thinks that AI will almost surely kill everyone. Do you agree with him or not?
George Hotz
(00:14:09)
Yes, but maybe for a different reason.
Lex Fridman
(00:14:14)
Then I’ll try to get you to find hope, or we could find a note of hope in that answer. But why yes?
George Hotz
(00:14:23)
Okay. Why didn’t nuclear weapons kill everyone?
Lex Fridman
(00:14:26)
That’s a good question.
George Hotz
(00:14:27)
I think there’s an answer. I think it’s actually very hard to deploy nuclear weapons tactically. It’s very hard to accomplish tactical objectives. Great. I can nuke their country. I have an irradiated pile of rubble. I don’t want that.
Lex Fridman
(00:14:39)
Why not?
George Hotz
(00:14:40)
Why don’t I want an irradiated pile of rubble?
Lex Fridman
(00:14:43)
Yeah.
George Hotz
(00:14:43)
For all the reasons no one wants an irradiated pile of rubble.
Lex Fridman
(00:14:46)
Oh, because you can’t use that land for resources. You can’t populate the land.
George Hotz
(00:14:52)
Yeah. Well, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It’s the subjugation and domination of the people.
Lex Fridman
(00:15:03)
Okay. You can’t use this strategically, tactically in a war to help gain a military advantage. It’s all complete destruction. All right.
George Hotz
(00:15:16)
Yeah.
Lex Fridman
(00:15:16)
There’s egos involved. It’s still surprising that nobody pressed the big red button.
George Hotz
(00:15:22)
It’s somewhat surprising. You see, it’s the little red button that’s going to be pressed with AI, and that’s why we die. It’s not because of anything in the nature of AI; it’s just the nature of humanity.
Lex Fridman
(00:15:37)
What’s the algorithm behind the little red button? What possible ideas do you have for how the human species ends?
George Hotz
(00:15:45)
Sure. I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it’s even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it’s probably hard to get all of humanity.
Lex Fridman
(00:16:10)
Yeah.
George Hotz
(00:16:11)
Yeah. It probably is.
Lex Fridman
(00:16:15)
The interesting thing about humanity is the diversity in it.
George Hotz
(00:16:17)
Oh yeah.
Lex Fridman
(00:16:18)
Organisms in general. There’s a lot of weirdos out there. Well, two of them are sitting here.
George Hotz
(00:16:23)
I mean, diversity in humanity is-
Lex Fridman
(00:16:25)
With due respect.
George Hotz
(00:16:27)
I wish I was more weird. No, look, I’m drinking Smart water, man. That’s like a Coca-Cola product, right?
Lex Fridman
(00:16:33)
You went corporate George Hotz.
George Hotz
(00:16:35)
Yeah, I went corporate. No, the amount of diversity in humanity I think is decreasing, just like all the other biodiversity on the planet.
Lex Fridman
(00:16:42)
Oh boy. Yeah.
George Hotz
(00:16:43)
Right.
Lex Fridman
(00:16:44)
Social media’s not helping.
George Hotz
(00:16:45)
Go eat McDonald’s in China.
Lex Fridman
(00:16:47)
Yeah.
George Hotz
(00:16:49)
Yeah. No, it’s the interconnectedness that’s doing it.
Lex Fridman
(00:16:54)
Oh, that’s interesting. Everybody starts relying on the connectivity of the internet. Over time, that reduces the diversity, the intellectual diversity, and then that gets everybody into a funnel. There’s still going to be a guy in Texas.
George Hotz
(00:17:08)
There is.
Lex Fridman
(00:17:09)
And a bunker.
George Hotz
(00:17:10)
To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that’s actually incredibly hard to do.
Lex Fridman
(00:17:22)
Yeah, but society, if we start over, that’s tricky. Most of us don’t know how to do most things.
George Hotz
(00:17:28)
Yeah, but some of us do, and they’ll be okay and they’ll rebuild after the great AI.
Lex Fridman
(00:17:36)
What’s rebuilding look like? How much do we lose? What has human civilization done that’s interesting? Combustion engine, electricity. So power and energy. That’s interesting. How to harness energy.
George Hotz
(00:17:54)
Whoa, whoa, whoa, whoa. They’re going to be religiously against that.
Lex Fridman
(00:17:58)
Are they going to get back to fire?
George Hotz
(00:18:02)
Sure. I mean, it’d be like some kind of Amish-looking kind of thing. I think they’re going to have very strong taboos against technology.
Lex Fridman
(00:18:13)
Technology is almost like a new religion. Technology is the devil and nature is God.
George Hotz
(00:18:20)
Sure.
Lex Fridman
(00:18:20)
Closer to nature. Can you really get away from AI? If it destroyed 99% of the human species, wouldn’t it somehow still have a hold, like a stronghold?
George Hotz
(00:18:30)
Well, what’s interesting about everything we build, I think we’re going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird. A bird is an incredibly robust organism. We’ve built nothing like this. We haven’t built a machine that’s capable of reproducing.
Lex Fridman
(00:18:58)
I work with Lego robots a lot now. I have a bunch of them. They’re mobile. They can’t reproduce. All they need is, I guess you’re saying they can’t repair themselves. If you have a large number, if you have a hundred million of them-
George Hotz
(00:19:13)
Let’s just focus on them reproducing. Do they have microchips in them?
Lex Fridman
(00:19:16)
Mm-hmm (affirmative).
George Hotz
(00:19:16)
Okay. Then do they include a fab?
Lex Fridman
(00:19:20)
No.
George Hotz
(00:19:21)
Then how are they going to reproduce?
Lex Fridman
(00:19:22)
Well, it doesn’t have to be all on board. They can go to a factory, to a repair shop.
George Hotz
(00:19:29)
Yeah, but then you’re really moving away from robustness.
Lex Fridman
(00:19:33)
Yes.
George Hotz
(00:19:33)
All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. If the AI apocalypse happens, I mean the AIs are going to probably die out because I think we’re going to get, again, super intelligence long before we get robustness.
Lex Fridman
(00:19:55)
What about if you just improve the fab to where you just have a 3D printer that can always help you?
George Hotz
(00:20:03)
Well, that’d be very interesting. I’m interested in building that.
Lex Fridman
(00:20:06)
Of course, you are. How difficult is that problem to have a robot that basically can build itself?
George Hotz
(00:20:15)
Very, very hard.
Lex Fridman
(00:20:16)
I think you’ve mentioned this to me or somewhere where people think it’s easy conceptually.
George Hotz
(00:20:24)
Then they remember that you’re going to have to have a fab.
Lex Fridman
(00:20:27)
Yeah, on board.
George Hotz
(00:20:30)
Of course.
Lex Fridman
(00:20:30)
3D printer that prints a 3D printer.
George Hotz
(00:20:34)
Yeah.
Lex Fridman
(00:20:34)
On legs. Why’s that hard?
George Hotz
(00:20:37)
Well, I mean, a 3D printer is a very simple machine, right? Okay, you’re going to print chips, you’re going to have an atomic printer. How are you going to dope the silicon?
Lex Fridman
(00:20:47)
Yeah.
George Hotz
(00:20:48)
Right. How are you going to etch the silicon?
Lex Fridman
(00:20:51)
You’re going to have a very interesting kind of fab if you want to have a lot of computation on board. You can do structural type of robots that are dumb.
George Hotz
(00:21:04)
Yeah, but structural type of robots aren’t going to have the intelligence required to survive in any complex environment.
Lex Fridman
(00:21:11)
What about like ants type of systems? We have trillions of them.
George Hotz
(00:21:15)
I don’t think this works. I mean, again, ants at their very core are made up of cells that are capable of individually reproducing.
Lex Fridman
(00:21:22)
They’re doing quite a lot of computation that we’re taking for granted.
George Hotz
(00:21:26)
It’s not even just the computation. It’s that reproduction is so inherent. There’s two stacks of life in the world. There’s the biological stack and the silicon stack. The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, as far as it’s come, is nowhere near being able to reproduce.
Lex Fridman
(00:21:51)
Yeah. So the fab movement, digital fabrication, fabrication in the full range of what that means, is still in the early stages.
George Hotz
(00:22:04)
Yeah.
Lex Fridman
(00:22:04)
You’re interested in this world?
George Hotz
(00:22:06)
Even if you did put a fab on the machine, let’s say, okay, yeah, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines in the fabs also in the machine. First off, this machine’s going to be absolutely massive. I mean, we almost have a… Think of the size of the thing required to reproduce a machine today. Is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
Lex Fridman
(00:22:34)
If we were to construct a machine that is made up of humans, like a company that can reproduce itself?
George Hotz
(00:22:40)
Yeah.
Lex Fridman
(00:22:40)
I don’t know. It feels like 115 people.
George Hotz
(00:22:47)
I think it’s so much harder than that.
Lex Fridman
(00:22:50)
120? I’m looking for a number.
George Hotz
(00:22:52)
Let’s see. I believe that Twitter can be run by 50 people. I think that this is going to take most of… it’s just most of society. We live in one globalized world now.
Lex Fridman
(00:23:04)
No, but you’re not interested in running Twitter, you’re interested in seeding. You want to seed a civilization and then because humans can like have sex.
George Hotz
(00:23:14)
Yeah. Okay. You’re talking about the humans reproducing and basically what’s the smallest self-sustaining colony of humans?
Lex Fridman
(00:23:19)
Yeah.
George Hotz
(00:23:20)
Yeah. Okay, fine, but they’re not going to be making five-nanometer chips.
Lex Fridman
(00:23:22)
Over time they will. We have to expand our conception of time here, going back to the original timescale. I mean, over maybe a hundred generations, we’re back to making chips. No? If you seed the colony correctly.
George Hotz
(00:23:40)
Maybe, or maybe they’ll watch our colony die out over here and be like, “We’re not making chips. Don’t make chips.”
Lex Fridman
(00:23:46)
No, but you have to seed that colony correctly.
George Hotz
(00:23:48)
Whatever you do, don’t make chips. Chips are what led to their downfall.
Lex Fridman
(00:23:54)
Well, that is the thing that humans do. They construct a devil, a good thing and a bad thing, and they really stick by that, and then they murder each other over it. There’s always one asshole in the room who murders everybody and usually makes tattoos and nice branding with flags and stuff.
George Hotz
(00:24:10)
Do you need that asshole? That’s the question. Humanity works really hard today to get rid of that asshole, but I think they might be important.
Lex Fridman
(00:24:16)
Yeah. This whole freedom of speech thing, the freedom of being an asshole, seems kind of important.
George Hotz
(00:24:22)
That’s right.
Lex Fridman
(00:24:23)
Man. This thing, this fab, this human fab that we constructed, this human civilization is pretty interesting. Now it’s building artificial copies of itself or artificial copies of various aspects of itself that seem interesting like intelligence. I wonder where that goes.
George Hotz
(00:24:44)
I like to think it’s just another stack for life. We have the biostack life. We’re a biostack life, and then the silicon stack life.
Lex Fridman
(00:24:50)
It seems like the ceiling, or there might not be a ceiling, or at least the ceiling is much higher for the silicon stack.
George Hotz
(00:24:57)
Oh, no. We don’t know what the ceiling is for the biostack either. The biostack just seems to move slower. You have Moore’s law, which is not dead despite many proclamations.
Lex Fridman
(00:25:09)
In the biostack or the silicon stack?
George Hotz
(00:25:11)
In the silicon stack. You don’t have anything like this in the biostack. I have a meme that I posted. I tried to make a meme. It didn’t work too well, but I posted a picture of Ronald Reagan and Joe Biden, and you look, this is 1980 and this is 2020.
Lex Fridman
(00:25:24)
Yeah.
George Hotz
(00:25:24)
These two humans are basically the same, right? No, there’s been no change in humans in the last 40 years. Then I posted a computer from 1980 and a computer from 2020. Wow.
Lex Fridman
(00:25:41)
Yeah. We’re in the early stages, which is why, as you said, the size of the fab required to make another fab is very large right now.
George Hotz
(00:25:52)
Yeah.
Lex Fridman
(00:25:53)
Computers were very large 80 years ago, and they got pretty tiny, and people are starting to want to wear them on their face in order to escape reality. That’s a thing. In order to live inside the computer: put a screen right here, and I don’t have to see the rest of you assholes.
George Hotz
(00:26:18)
I’ve been ready for a long time.

Virtual reality

Lex Fridman
(00:26:19)
You like virtual reality?
George Hotz
(00:26:20)
I love it.
Lex Fridman
(00:26:22)
Do you want to live there?
George Hotz
(00:26:23)
Yeah.
Lex Fridman
(00:26:25)
Yeah. Part of me does too. How far away are we do you think?
George Hotz
(00:26:31)
Judging from what you can buy today? Far, very far.
Lex Fridman
(00:26:35)
I got to tell you that I had the experience of Meta’s Codec Avatar, where it’s an ultra-high-resolution scan. It looked real.
George Hotz
(00:26:51)
I mean, the headsets just are not quite at eye resolution yet. I haven’t put on any headset where I’m like, “Oh, this could be the real world.” Whereas when I put good headphones on, audio is there. We can reproduce audio that I’m like, “I’m actually in a jungle right now. If I close my eyes, I can’t tell I’m not.”
Lex Fridman
(00:27:09)
Yeah. Then there’s also smell and all that kind of stuff.
George Hotz
(00:27:11)
Sure.
Lex Fridman
(00:27:13)
I don’t know. The power of imagination or the power of the mechanism in the human mind that fills the gaps that reaches and wants to make the thing you see in the virtual world real to you. I believe in that power.
George Hotz
(00:27:29)
Or humans want to believe.
Lex Fridman
(00:27:30)
Yeah. What if you’re lonely? What if you’re sad? What if you’re really struggling in life, and here’s a world where you don’t have to struggle anymore?
George Hotz
(00:27:39)
Humans want to believe so much that people think the large language models are conscious. That’s how much humans want to believe.
Lex Fridman
(00:27:46)
Strong words, he’s throwing left and right hooks. Why do you think large language models are not conscious?
George Hotz
(00:27:53)
I don’t think I’m conscious.
Lex Fridman
(00:27:55)
Oh, so what is consciousness then George Hotz?
George Hotz
(00:27:58)
It’s like, what it seems to mean to people, it’s just a word that atheists use for souls.
Lex Fridman
(00:28:04)
Sure. That doesn’t mean soul is not an interesting word.
George Hotz
(00:28:08)
If consciousness is a spectrum, I’m definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
Lex Fridman
(00:28:19)
When is the last time you’ve seen a chicken?
George Hotz
(00:28:22)
In Miami, a couple months ago.
Lex Fridman
(00:28:26)
No. A living chicken.
George Hotz
(00:28:27)
Just living chickens walking around Miami. It’s crazy.
Lex Fridman
(00:28:30)
Like on the street?
George Hotz
(00:28:30)
Yeah.
Lex Fridman
(00:28:31)
Like a chicken?
George Hotz
(00:28:32)
A chicken. Yeah.
Lex Fridman
(00:28:36)
All right. I was trying to call you out, like a good journalist, and I got shut down. Okay. You don’t think much about this subjective feeling that it feels like something to exist. Then as an observer, you can have a sense that an entity is not only intelligent, but has a subjective experience of its reality, like a self-awareness that is capable of suffering, of hurting, of being excited by the environment in a way that’s not merely an artificial response, but a deeply felt one.
George Hotz
(00:29:22)
Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they’d think the rock is sad.
Lex Fridman
(00:29:32)
You’re saying when we look in the mirror, we apply the same smiley face with rock?
George Hotz
(00:29:36)
Pretty much, yeah.
Lex Fridman
(00:29:38)
Isn’t that weird though, that you’re not conscious?
George Hotz
(00:29:42)
No.
Lex Fridman
(00:29:43)
You do believe in consciousness?
George Hotz
(00:29:45)
Not really.
Lex Fridman
(00:29:46)
It’s unclear. Okay. To you it’s like a little symptom of the bigger thing that’s not that important.
George Hotz
(00:29:53)
Yeah. I mean, it’s interesting that the human systems seem to claim that they’re conscious, and I guess it says something. Straight up, even if you don’t believe in consciousness, what do people mean when they say consciousness? There’s definitely meanings to it.
Lex Fridman
(00:30:06)
What’s your favorite thing to eat?
George Hotz
(00:30:11)
Pizza.
Lex Fridman
(00:30:12)
Cheese pizza. What are the toppings?
George Hotz
(00:30:13)
I like cheese pizza. I like pepperoni.
Lex Fridman
(00:30:14)
Don’t say pineapple.
George Hotz
(00:30:15)
No, I don’t like pineapple.
Lex Fridman
(00:30:16)
Okay. Pepperoni pizza.
George Hotz
(00:30:17)
If they put any ham on it I’ll just feel bad.
Lex Fridman
(00:30:20)
What’s the best pizza? What are we talking about here? Do you like cheap, crappy pizza?
George Hotz
(00:30:24)
A Chicago deep dish cheese pizza. Oh, that’s my favorite.
Lex Fridman
(00:30:27)
There you go. You bite into a Chicago deep dish pizza, and it feels like, so you were starving, you haven’t eaten for 24 hours. You just bite in and you’re hanging out with somebody that matters a lot to you. You’re there with the pizza.
George Hotz
(00:30:39)
That sounds real nice, man.
Lex Fridman
(00:30:40)
Yeah. All right. It feels like something I’m George motherfucking Hotz eating a fucking Chicago deep dish pizza. There’s just the full peak living experience of being human, the top of the human condition.
George Hotz
(00:30:57)
Sure.
Lex Fridman
(00:30:58)
It feels like something to experience that.
George Hotz
(00:31:00)
Mm-hmm (affirmative).
Lex Fridman
(00:31:02)
Why does it feel like something? That’s consciousness, isn’t it?
George Hotz
(00:31:06)
If that’s the word you want to use to describe it, sure. I’m not going to deny that that feeling exists. I’m not going to deny that I experienced that feeling. I guess what I take issue with is that there’s some… like, how does it feel to be a web server? Do 404s hurt?
Lex Fridman
(00:31:23)
Not yet.
George Hotz
(00:31:24)
How would you know what suffering looked like? Sure you can recognize a suffering dog because we’re the same stack as the dog. All the biostack stuff kind of, especially mammals. It’s really easy. You can…
Lex Fridman
(00:31:35)
Game recognizes game.
George Hotz
(00:31:37)
Yeah. Versus the silicon stack stuff it’s like, you have no idea. Wow the little thing has learned to mimic. Then I realized that that’s all we are too. Well, look, the little thing has learned to mimic.
Lex Fridman
(00:31:54)
Yeah. I guess, yeah. 404 could be suffering, but it’s so far from our kind-
Lex Fridman
(00:32:03)
… So far from our kind of living organism, our kind of stack. It feels like AI can start maybe mimicking the biological stack better and better. It’s trained.
George Hotz
(00:32:13)
We trained it, yeah.
Lex Fridman
(00:32:15)
In that case, maybe that’s the definition of consciousness: the biostack consciousness.
George Hotz
(00:32:20)
The definition of consciousness is how close something looks to human. Sure, I’ll give you that one.
Lex Fridman
(00:32:24)
No, how close something is to the human experience.
George Hotz
(00:32:28)
Sure. It’s a very anthropocentric definition, but…
Lex Fridman
(00:32:33)
Well, that’s all we got.

AI friends

George Hotz
(00:32:34)
Sure. No. I think there’s a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends. I mean it.
Lex Fridman
(00:32:43)
I want to find out what your fourth company is after that.
George Hotz
(00:32:46)
Oh, wow.
Lex Fridman
(00:32:46)
I think once you have AI girlfriends, oh boy, does it get interesting. Well, maybe let’s go there. The relationships with AI, that’s creating human-like organisms. Part of being human is being conscious, is having the capacity to suffer, having the capacity to experience this life richly, in such a way that you can empathize. Is that AI system going to empathize with you, and can you empathize with it? Or you can project your anthropomorphic sense of what the other entity is experiencing.

(00:33:22)
An AI model would need to create that experience inside your mind. It doesn’t seem that difficult.
George Hotz
(00:33:28)
Yeah. Okay, so here’s where it actually gets totally different. When you interact with another human, you can make some assumptions.
Lex Fridman
(00:33:37)
Yeah.
George Hotz
(00:33:38)
When you interact with these models, you can’t. You can make some assumptions that other human experiences suffering and pleasure in a pretty similar way to you do, the golden rule applies. With an AI model, this isn’t really true. These large language models are good at fooling people, because they were trained on a whole bunch of human data and told to mimic it.
Lex Fridman
(00:33:59)
Yep, but if the AI system says, “Hi, my name is Samantha,” it has a backstory. “Went to college here and there,” maybe it’ll integrate this in the AI system.
George Hotz
(00:34:11)
I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when Llama came out.
Lex Fridman
(00:34:16)
Yeah. Well, we’ll talk about Llama, we’ll talk about all that. The rock with a smiley face, it seems pretty natural for you to anthropomorphize that thing and then start dating it. Before you know it, you’re married and have kids.
George Hotz
(00:34:33)
With a rock?
Lex Fridman
(00:34:34)
With a rock, and there’s pictures on Instagram with you and a rock and a smiley face.
George Hotz
(00:34:38)
To be fair, something that people generally look for when they’re looking for someone to date is intelligence in some form. The rock doesn’t really have intelligence. Only a pretty desperate person would date a rock.
Lex Fridman
(00:34:50)
I think we’re all desperate, deep down.
George Hotz
(00:34:52)
Oh, not rock level desperate.
Lex Fridman
(00:34:54)
All right. Not rock level desperate, but AI level desperate. I don’t know. I think all of us have a deep loneliness. It just feels like the language models are there.
George Hotz
(00:35:09)
Oh, I agree. You know what? I won’t even say this so cynically. I will actually say this in a way that I want AI friends. I do.
Lex Fridman
(00:35:14)
Yeah.
George Hotz
(00:35:16)
I would love to. Again, the language models now are still a little… People are impressed with these GPT things, or Copilot, the coding one. I’m like, “Okay, this is junior-engineer level, and these are Fiverr-level artists and copywriters.” Okay, great. We got Fiverr and junior engineers. Okay, cool. This is just the start, and it will get better, right? I can’t wait to have AI friends who are more intelligent than I am.
Lex Fridman
(00:35:50)
Fiverr is just a temporary, it’s not the ceiling?
George Hotz
(00:35:52)
No, definitely not.
Lex Fridman
(00:35:53)
Does it count as cheating when you’re talking to an AI model? Emotional cheating?
George Hotz
(00:36:03)
That’s up to you and your human partner to define.
Lex Fridman
(00:36:07)
Oh, you have to. All right.
George Hotz
(00:36:08)
You have to have that conversation, I guess.
Lex Fridman
(00:36:12)
All right. Integrate that with porn and all this stuff.
George Hotz
(00:36:16)
Well, no, it’s similar kind of to porn.
Lex Fridman
(00:36:18)
Yeah.
George Hotz
(00:36:18)
Yeah. I think people in relationships have different views on that.
Lex Fridman
(00:36:23)
Yeah, but most people don’t have serious, open conversations about all the different aspects of what’s cool and what’s not. It feels like AI is a really weird conversation to have.
George Hotz
(00:36:38)
The porn one is a good branching off.
Lex Fridman
(00:36:40)
For sure.
George Hotz
(00:36:40)
One of my scenarios that I put in my chatbot is a nice girl named Lexi. She’s 20. She just moved out to LA. She wanted to be an actress, but she started doing OnlyFans instead. You’re on a date with her. Enjoy.
Lex Fridman
(00:36:56)
Oh, man. Yeah. If you’re actually dating somebody in real life, is that cheating? I feel like it gets a little weird.
George Hotz
(00:37:05)
Sure.
Lex Fridman
(00:37:05)
It gets real weird. It’s like, what are you allowed to say to an AI bot? Imagine having that conversation with a significant other.
George Hotz
(00:37:11)
These are all things for people to define in their relationships. What it means to be human is just going to start to get weird.
Lex Fridman
(00:37:17)
Especially online. How do you know? There’ll be moments when you’ll have what you think is a real human you’re interacting with on Twitter for years, and you realize it’s not.
George Hotz
(00:37:28)
I love this meme that I spread: heaven banning. You’ve heard about shadow-banning?
Lex Fridman
(00:37:33)
Yeah.
George Hotz
(00:37:34)
Right. Shadow-banning: okay, you post, no one can see it. Heaven banning: you post, no one can see it, but a whole lot of AIs are spun up to interact with you.
Lex Fridman
(00:37:44)
Well, maybe that’s what the way human civilization ends is all of us are heaven banned.
George Hotz
(00:37:48)
There’s a great one, it’s called My Little Pony: Friendship is Optimal. It’s a sci-fi story that explores this idea.
Lex Fridman
(00:37:56)
Friendship is Optimal.
George Hotz
(00:37:57)
Friendship is Optimal.
Lex Fridman
(00:37:58)
Yeah. I’d like to have some, at least on the intellectual realm, some AI friends that argue with me. The romantic realm is weird, definitely weird, but not out of the realm of the kind of weirdness that human civilization is capable of, I think.
George Hotz
(00:38:20)
Look, I want it. If no one else wants it, I want it.
Lex Fridman
(00:38:23)
Yeah. I think a lot of people probably want it. There’s a deep loneliness.
George Hotz
(00:38:27)
I’ll fill their loneliness, and it just will only advertise to you some of the time.
Lex Fridman
(00:38:33)
Yeah. Maybe the conceptions of monogamy change too. I grew up in a time, I value monogamy, but maybe that’s a silly notion when you have arbitrary number of AI systems.
George Hotz
(00:38:43)
Yeah, on this interesting path from rationality to polyamory. Yeah. That doesn’t make sense for me,
Lex Fridman
(00:38:50)
For you, but you’re just a biological organism who was born before the internet really took off.
George Hotz
(00:38:58)
The crazy thing is, culture is whatever we define it as. These things are not… [inaudible 00:39:04] a problem in moral philosophy, right? Okay. It might be that computers are capable of mimicking girlfriends perfectly. They pass the girlfriend Turing test, but that doesn’t say anything about ought.

(00:39:18)
That doesn’t say anything about how we ought to respond to them as a civilization. That doesn’t say we ought to get rid of monogamy. Right. That’s a completely separate question, really, a religious one.
Lex Fridman
(00:39:27)
Girlfriend Turing test. I wonder what that looks like.
George Hotz
(00:39:30)
Girlfriend Turing test.
Lex Fridman
(00:39:31)
Are you writing that? Will you be the Alan Turing of the 21st century that writes the Girlfriend Turing test?
George Hotz
(00:39:38)
No, of course, my AI girlfriends, their goal is to pass the girlfriend Turing test.
Lex Fridman
(00:39:43)
No, but there should be a paper that kind of defines the test. The question is if it’s deeply personalized, or if there’s a common thing that really gets everybody.
George Hotz
(00:39:55)
Yeah. Look, we’re a company. We don’t have to get everybody. We just have to get a large enough clientele to stay with us.

tiny corp

Lex Fridman
(00:40:01)
I like how you’re already thinking company. All right. Before we go to company number three and company number four, let’s go to company number two.
George Hotz
(00:40:09)
All right.
Lex Fridman
(00:40:09)
Tiny Corp, possibly one of the greatest names of all time for a company. You’ve launched a new company called Tiny Corp that leads the development of Tinygrad. What’s the origin story of Tiny Corp and Tinygrad?
George Hotz
(00:40:25)
I started Tinygrad as a toy project, just to teach myself, okay, what is a convolution? What are all these options you can pass to them? What is the derivative of a convolution? Very similar to how Karpathy wrote micrograd, it’s very similar. Then I started realizing, I started thinking about AI chips, about chips that run AI. I was like, “Well, okay. This is going to be a really big problem. If Nvidia becomes a monopoly here, how long before Nvidia is nationalized?”
Lex Fridman
(00:41:04)
One of the reasons to start Tiny Corp is to challenge Nvidia.
George Hotz
(00:41:10)
It’s not so much to challenge Nvidia. Actually, I like Nvidia. It’s to make sure power stays decentralized.
Lex Fridman
(00:41:21)
Yeah. Here, it’s computational power. To you, Nvidia is kind of locking down the computational power of the world.
George Hotz
(00:41:31)
Nvidia becomes just like 10X better than everything else, you’re giving a big advantage to somebody who can secure Nvidia as a resource.
Lex Fridman
(00:41:41)
Yeah.
George Hotz
(00:41:42)
In fact, if Jensen watches this podcast, he may want to consider this. He may want to consider making sure his company’s not nationalized.
Lex Fridman
(00:41:50)
Do you think that’s an actual threat?
George Hotz
(00:41:52)
Oh, yes.
Lex Fridman
(00:41:55)
No, but there’s so much, there’s AMD.
George Hotz
(00:41:57)
We have Nvidia and AMD. Great.
Lex Fridman
(00:42:00)
All right. You don’t think there’s a push towards selling Google selling TPUs or something like this? You don’t think there’s a push for that?
George Hotz
(00:42:10)
Have you seen it? Google loves to rent you TPUs.
Lex Fridman
(00:42:14)
It doesn’t, you can’t buy it at Best Buy?
George Hotz
(00:42:18)
No.
Lex Fridman
(00:42:18)
Okay.
George Hotz
(00:42:18)
I started work on a chip. I was like, “Okay, what’s it going to take to make a chip?” My first notions were all completely wrong about why, about how you could improve on GPUs. I’ll take this, this is from Jim Keller on your podcast. This is one of my absolute favorite descriptions of computation. There’s three kinds of computation paradigms that are common in the world today.

(00:42:45)
There’s CPUs, and CPUs can do everything. CPUs can do add and multiply. They can do load and store, and they can do compare and branch. When I say they can do these things, they can do them all fast. Compare and branch are unique to CPUs. What I mean by they can do them fast is they can do things like branch prediction, and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast.

(00:43:09)
Then you have a simpler computation model, GPUs. GPUs can’t really do compare and branch. They can, but it’s horrendously slow. GPUs can do arbitrary load and store. GPUs can do things like X, dereference Y, so they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data.

(00:43:27)
The third model of computation is DSPs. DSPs are just add and multiply. They can do loads and stores, but only static loads and stores, only loads and stores that are known before the program runs. You look at neural networks today, and 95% of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies. Tiny Corp really took this idea, and I’m still working on it to extend this as far as possible. Every stage of the stack has Turing completeness.
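As a rough illustrative sketch (plain Python, not tinygrad code), a neural network layer in this DSP paradigm is just a statically scheduled sequence of multiplies and adds: every loop bound and memory access depends only on the shapes, never on the data, so the whole schedule is known before the program runs.

```python
# A dense layer as a DSP-style computation: a fixed, statically known
# sequence of multiply-accumulates with no data-dependent branching.
def dense_layer(x, w, b):
    out = [0.0] * len(b)
    for j in range(len(b)):        # static bound: output size
        acc = b[j]
        for i in range(len(x)):    # static bound: input size
            acc += x[i] * w[i][j]  # multiply-accumulate, no branching
        out[j] = acc
    return out

y = dense_layer([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])
# y == [1.5, 2.5]
```

The control flow here is identical for every input; only the data changes, which is exactly the property that makes static scheduling possible.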

(00:43:58)
Python has Turing completeness, and then we take Python, we go into C++, which is Turing complete, and then maybe C++ calls into some CUDA kernels, which are Turing complete. The CUDA kernels go through LLVM, which is Turing complete, into PTX, which is Turing complete, into SASS, which is Turing complete, on a Turing complete processor. I want to get Turing completeness out of the stack entirely.

(00:44:15)
Once you get rid of Turing completeness, you can reason about things. Rice’s Theorem and the halting problem do not apply to [inaudible 00:44:20] machines.
Lex Fridman
(00:44:23)
Okay. What’s the power and the value of getting Turing completeness out of, are we talking about the hardware or the software?
George Hotz
(00:44:31)
Every layer of the stack.
Lex Fridman
(00:44:32)
Every layer.
George Hotz
(00:44:32)
Every layer of the stack. Removing Turing completeness allows you to reason about things. The reason you need to do branch prediction in a CPU, and the reason it’s prediction, is that the branch predictors are, I think, like 99% accurate on CPUs. Why do they get 1% of them wrong? Well, they get 1% wrong because you can’t know. That’s the halting problem. It’s equivalent to the halting problem to say whether a branch is going to be taken or not.

(00:44:56)
I can show that. The add-mul machine, the neural network, runs the identical compute every time. The only thing that changes is the data. When you realize this, you think about, “Okay, how can we build a computer, and how can we build a stack that takes maximal advantage of this idea?”

(00:45:19)
What makes Tinygrad different from other neural network libraries is it does not have a primitive operator even for matrix multiplication, and this is every single one of them. They even have primitive operators for things like convolutions.
Lex Fridman
(00:45:31)
No MatMul?
George Hotz
(00:45:32)
No MatMul. Well, here’s what a MatMul is. I’ll use my hands to talk here. If you think about a cube, and I put my two matrices that I’m multiplying on two faces of the cube, you can think about the matrix multiply as, okay, N cubed, I’m going to do a multiply for each one in the cube. Then I’m going to do a sum, which is a reduce, up to here, to the third face of the cube. That’s your multiplied matrix.

(00:45:56)
What a matrix multiply is is a bunch of shape operations, a bunch of permutes, reshapes, and expands on the two matrices, a multiply N cubed, and a reduce N cubed, which gives you an N-squared matrix.
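The decomposition he describes can be sketched in NumPy (an illustration of the idea, not tinygrad’s actual implementation): movement ops lay the two matrices on two faces of the cube, a single elementwise multiply fills the cube, and a sum-reduce collapses it to the result.

```python
import numpy as np

def matmul_from_primitives(a, b):
    n, m, k = a.shape[0], a.shape[1], b.shape[1]
    a3 = a.reshape(n, m, 1)   # movement op: A on one face of the cube
    b3 = b.reshape(1, m, k)   # movement op: B on another face
    cube = a3 * b3            # binary op: all n*m*k multiplies at once
    return cube.sum(axis=1)   # reduce op: sum over the shared axis

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(matmul_from_primitives(a, b), a @ b)
```

NumPy’s broadcasting plays the role of the expand: each reshaped matrix is virtually replicated along the missing axis, so no primitive matmul operator is ever needed.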
Lex Fridman
(00:46:09)
Okay. What is the minimum number of operations that can accomplish that, if you don’t have MatMul as a primitive?
George Hotz
(00:46:16)
Tinygrad has about 20, and you can compare Tinygrad’s op set or IR to things like XLA or PrimTorch. XLA and PrimTorch are ideas where like, okay, Torch has like 2,000 different kernels. PyTorch 2.0 introduced PrimTorch, which has only 250. Tinygrad has on the order of 25. It’s 10X less than XLA or PrimTorch. You can think about it as kind of RISC versus CISC, right? These other things are CISC-like systems. Tinygrad is RISC.
Lex Fridman
(00:46:53)
RISC won.
George Hotz
(00:46:54)
RISC architecture is going to change everything. 1995, Hackers.
Lex Fridman
(00:46:59)
Wait, really? That’s an actual thing?
George Hotz
(00:47:01)
Angelina Jolie delivers the line, “RISC architecture is going to change everything,” in 1995.
Lex Fridman
(00:47:06)
Wow.
George Hotz
(00:47:06)
Here we are with ARM and the phones and ARM everywhere.
Lex Fridman
(00:47:10)
Wow. I love it when movies actually have real things in them.
George Hotz
(00:47:13)
Right?
Lex Fridman
(00:47:14)
Okay, interesting. You’re thinking of this as the RISC architecture of ML Stack. 25, huh? Can you go through the four OP types?
George Hotz
(00:47:29)
Sure. Okay. You have unary ops, which take in a tensor and return a tensor of the same size, and do some unary op to it. Exp, log, reciprocal, sin. They take in one and they’re pointwise.
Lex Fridman
(00:47:44)
Relu.
George Hotz
(00:47:48)
Yeah, Relu. Almost all activation functions are unary ops. Some combination of unary ops together is still a unary op. Then you have binary ops. Binary ops are like pointwise addition, multiplication, division, compare. They take in two tensors of equal size, and output one tensor. Then you have reduce ops. Reduce ops will take, say, a three-dimensional tensor and turn it into a two-dimensional tensor, or take a three-dimensional tensor and turn it into a zero-dimensional tensor.

(00:48:17)
Think like a sum or a max are really common ones there. Then the fourth type is movement ops. Movement ops are different from the other types, because they don’t actually require computation. They require different ways to look at memory. That includes reshapes, permutes, expands, flips. Those are the main ones, probably.
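The four op types can be illustrated with NumPy operations (an analogy for the concepts, not tinygrad’s own op set):

```python
import numpy as np

t = np.array([[1.0, -2.0], [3.0, -4.0]])
u = np.array([[0.5, 0.5], [0.5, 0.5]])

# Unary op: pointwise, same shape in and out (e.g. ReLU).
relu = np.maximum(t, 0.0)        # shape (2, 2)

# Binary op: two equal-shaped tensors in, one tensor out.
added = t + u                    # shape (2, 2)

# Reduce op: collapses an axis (e.g. sum, max).
row_max = t.max(axis=1)          # shape (2,)

# Movement op: no computation, just a different view of memory
# (reshape, permute, expand, flip).
flipped = t[:, ::-1]

assert relu.tolist() == [[1.0, 0.0], [3.0, 0.0]]
```

The movement op is the distinctive one: `flipped` shares the same underlying data as `t`, which is why movement ops cost no compute, only a different way of indexing memory.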
Lex Fridman
(00:48:35)
With that, you have enough to make a MatMul?
George Hotz
(00:48:38)
And convolutions, and every convolution you can imagine, dilated convolutions, strided convolutions, transposed convolutions.
Lex Fridman
(00:48:46)
You write on GitHub about laziness, showing a MatMul, matrix multiplication. See how, despite the style, it is fused into one kernel with the power of laziness. Can you elaborate on this power of laziness?
George Hotz
(00:49:01)
Sure. If you type in PyTorch, A times B plus C, what this is going to do is it’s going to first multiply A and B, and store that result into memory. Then it is going to add C by reading that result from memory, reading C from memory, and writing that out to memory.

(00:49:21)
There are way more loads and stores to memory than you need there. If you don’t actually do A times B as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus C. It’s the same way Haskell works.
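A toy sketch of the idea in plain Python (the class and method names here are invented for illustration, nothing like tinygrad’s real internals): operations build a graph instead of executing, and realizing the tensor evaluates A times B plus C in one fused pass per element, with no intermediate buffer for A times B.

```python
class Lazy:
    def __init__(self, op, srcs=(), data=None):
        self.op, self.srcs, self.data = op, srcs, data

    def __mul__(self, other): return Lazy("mul", (self, other))
    def __add__(self, other): return Lazy("add", (self, other))

    def _eval(self, i):
        # Evaluate one output element by walking the whole graph:
        # the multiply and the add happen together, fused.
        if self.op == "load":
            return self.data[i]
        x, y = (s._eval(i) for s in self.srcs)
        return x * y if self.op == "mul" else x + y

    def realize(self):
        node = self
        while node.op != "load":   # find a leaf to learn the length
            node = node.srcs[0]
        return [self._eval(i) for i in range(len(node.data))]

a = Lazy("load", data=[1.0, 2.0])
b = Lazy("load", data=[3.0, 4.0])
c = Lazy("load", data=[5.0, 6.0])
out = a * b + c        # nothing is computed yet, just a graph
print(out.realize())   # one fused pass: prints [8.0, 14.0]
```

Contrast with eager execution, where `a * b` would be fully computed and written to memory before the add ever sees it.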
Lex Fridman
(00:49:39)
What’s the process of porting a model into Tinygrad?
George Hotz
(00:49:44)
Tinygrad’s front end looks very similar to PyTorch. I probably could make a perfect, or pretty close to perfect, interop layer if I really wanted to. I think that there’s some things that are nicer about Tinygrad’s syntax than PyTorch, but their front end looks very Torch-like. You can also load in ONNX models.
Lex Fridman
(00:49:59)
Okay.
George Hotz
(00:50:00)
We have more ONNX tests passing than Core ML.
Lex Fridman
(00:50:04)
Core ML. Okay.
George Hotz
(00:50:06)
We’ll pass ONNX Runtime soon.
Lex Fridman
(00:50:07)
Well, what about the developer experience with Tinygrad? What it feels like versus PyTorch?
George Hotz
(00:50:16)
By the way, I really like PyTorch. I think that it’s actually a very good piece of software. I think that they’ve made a few different trade-offs, and these different trade-offs are where Tinygrad takes a different path. One of the biggest differences is it’s really easy to see the kernels that are actually being sent to the GPU, right?

(00:50:35)
If you run PyTorch on a GPU, you do some operation, and you don’t know what kernels ran, you don’t know how many kernels ran. You don’t know how many flops were used. You don’t know how many memory accesses were used. In Tinygrad, type DEBUG=2, and it will show you, in this beautiful style, every kernel that’s run, how many flops, and how many bytes.
Lex Fridman
(00:50:58)
Can you just linger on what problem Tinygrad solves?
George Hotz
(00:51:04)
Tinygrad solves the problem of porting new ML accelerators quickly. There are tons of these companies now. I think Sequoia marked Graphcore to zero. Cerebras, Tenstorrent, Groq. All of these ML accelerator companies, they built chips. The chips were good, the software was terrible.

(00:51:28)
Part of the reason is because I think the same problem’s happening with Dojo. It’s really, really hard to write a PyTorch port, because you have to write 250 kernels, and you have to tune them all for performance.
Lex Fridman
(00:51:40)
What does Jim Keller think about Tinygrad? You guys hung out quite a bit. He was involved. He’s involved with Tenstorrent.
George Hotz
(00:51:48)
Sure.
Lex Fridman
(00:51:49)
What’s his praise, and what’s his criticism of what you’re doing with your life?
George Hotz
(00:51:54)
Look, my prediction for Tenstorrent is that they’re going to pivot to making RISC-V chips, CPUs.
Lex Fridman
(00:52:03)
CPUs.
George Hotz
(00:52:04)
Yeah.
Lex Fridman
(00:52:05)
Why?
George Hotz
(00:52:08)
Why? AI accelerators are a software problem, not really a hardware problem.
Lex Fridman
(00:52:12)
Oh, interesting. You think the diversity of AI accelerators in the hardware space is not going to be a thing that exists long term?
George Hotz
(00:52:21)
I think what’s going to happen is, okay. If you’re trying to make an AI accelerator, you better have the capability of writing a Torch-level performance stack on Nvidia GPUs. If you can’t write a Torch stack on Nvidia GPUs, and I mean all the way, I mean down to the driver, there’s no way you’re going to be able to write it on your chip. Your chip’s worse than an Nvidia GPU. The first version of the chip you tape out, it’s definitely worse.
Lex Fridman
(00:52:46)
Oh, you’re saying writing that stack is really tough?
George Hotz
(00:52:48)
Yes, and not only that, actually the chip that you tape out, almost always, because you’re trying to get an advantage over Nvidia, you’re specializing the hardware more. It’s always harder to write software for more specialized hardware. A GPU is pretty generic. If you can’t write an Nvidia stack, there’s no way you can write a stack for your chip. My approach with Tinygrad is first write a performant Nvidia stack. We’re targeting AMD.
Lex Fridman
(00:53:13)
You did say FU to Nvidia a little bit with Love.
George Hotz
(00:53:16)
With love. Yeah, with love. It’s like the Yankees. I’m a Mets fan.

NVIDIA vs AMD

Lex Fridman
(00:53:20)
Oh, you’re a Mets fan? A RISC fan and a Mets fan. What’s the hope that AMD has? You did a build with AMD recently that I saw. How does the 7900 XTX compare to the RTX 4090 or 4080?
George Hotz
(00:53:38)
Oh, well, let’s start with the fact that the 7900 XTX kernel drivers don’t work. If you run demo apps in loops, it panics the kernel.
Lex Fridman
(00:53:46)
Okay, so this is a software issue.
George Hotz
(00:53:49)
Lisa Sue responded to my email.
Lex Fridman
(00:53:51)
Oh.
George Hotz
(00:53:51)
I reached out. I was like, “This is, really?”
Lex Fridman
(00:53:56)
Yeah.
George Hotz
(00:53:57)
I understand if your seven by seven transposed Winograd conv is slower than Nvidia’s, but literally, when I run demo apps in a loop, the kernel panics?
Lex Fridman
(00:54:08)
Just adding that loop?
George Hotz
(00:54:10)
Yeah. I just literally took their demo apps and wrote, “While true; do the app; done,” in a bunch of screens. This is the most primitive fuzz testing.
Lex Fridman
(00:54:20)
Why do you think that is? They’re just not seeing a market in machine learning?
George Hotz
(00:54:26)
They’re changing. They’re trying to change. They’re trying to change. I had a pretty positive interaction with them this week. Last week, I went on YouTube. I was just like, “That’s it. I give up on AMD. Their driver doesn’t even… I’ll go with Intel GPUs. Intel GPUs have better drivers.”
Lex Fridman
(00:54:45)
You’re kind of spearheading the diversification of GPUs.
George Hotz
(00:54:50)
Yeah, and I’d like to extend that diversification to everything. I’d like to diversify, the more my central thesis about the world is there’s things that centralize power, and they’re bad. There’s things that decentralize power, and they’re good. Everything I can do to help decentralize power, I’d like to do.
Lex Fridman
(00:55:12)
You’re really worried about the centralization of Nvidia. That’s interesting. You don’t have a fundamental hope for the proliferation of ASICs except in the cloud?
George Hotz
(00:55:23)
I’d like to help them with software. No, actually, the only ASIC that is remotely successful is Google’s TPU. The only reason that’s successful is because Google wrote a machine learning framework. I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.
Lex Fridman
(00:55:41)
You think Meta with PyTorch builds a competitor?
George Hotz
(00:55:45)
I hope so.
Lex Fridman
(00:55:46)
Okay.
George Hotz
(00:55:46)
They have one. They have an internal one.
Lex Fridman
(00:55:48)
Internal, I mean public facing with a nice cloud interface and so on?
George Hotz
(00:55:52)
I don’t want a cloud.
Lex Fridman
(00:55:53)
You don’t like cloud?
George Hotz
(00:55:55)
I don’t like cloud.
Lex Fridman
(00:55:55)
What do you think is the fundamental limitation of cloud?
George Hotz
(00:55:58)
Fundamental limitation of cloud is who owns the off switch.
Lex Fridman
(00:56:02)
That’s the power to the people.
George Hotz
(00:56:03)
Yeah.
Lex Fridman
(00:56:04)
You don’t like the man to have all the power.
George Hotz
(00:56:07)
Exactly.

tinybox

Lex Fridman
(00:56:08)
All right. Right now, the only way to do that is with Nvidia GPUs if you want performance and stability. Interesting. It’s a costly investment emotionally to go with AMD’s. Well, let me on a tangent, ask you, you’ve built quite a few PCs. What’s your advice on how to build a good custom PC for, let’s say, for the different applications that you use for gaming, for machine learning?
George Hotz
(00:56:35)
Well, you shouldn’t build one. You should buy a box from the Tiny Corp.
Lex Fridman
(00:56:39)
I heard rumors, whispers about this box in the Tiny Corp. What’s this thing look like? What is it called?
George Hotz
(00:56:48)
It’s called the Tinybox.
Lex Fridman
(00:56:48)
Tinybox.
George Hotz
(00:56:51)
It’s $15,000, and it’s almost a petaflop of compute. It’s over a hundred gigabytes of GPU RAM. It’s over five terabytes per second of GPU memory bandwidth. I’m going to put four NVMes in RAID. You’re going to get like 20, 30 gigabytes per second of drive read bandwidth. I’m going to build the best deep learning box that I can that plugs into one wall outlet.
Lex Fridman
(00:57:19)
Okay. Can you go through those specs again a little bit from memory?
George Hotz
(00:57:23)
Yeah. It’s almost a petaflop of compute.
Lex Fridman
(00:57:25)
AMD, Intel?
George Hotz
(00:57:26)
Today I’m leaning toward AMD, but we’re pretty agnostic to the type of compute. The main limiting spec is a 120 volt, 15 amp circuit.
Lex Fridman
(00:57:40)
Okay.
George Hotz
(00:57:41)
Well, I mean it. In order to, there’s a plug over there. You have to be able to plug it in. We’re also going to sell the Tiny Rack, which, what’s the most power you can get into your house without arousing suspicion? One of the answers is an electric car charger.
Lex Fridman
(00:57:59)
Wait, where does the Rack go?
George Hotz
(00:58:01)
Your garage.
Lex Fridman
(00:58:03)
Interesting. The car charger?
George Hotz
(00:58:05)
A wall outlet is about 1500 watts. A car charger is about 10,000 watts.
Lex Fridman
(00:58:11)
Okay. What is the most amount of power you can get your hands on without arousing suspicion?
George Hotz
(00:58:16)
That’s right.
Lex Fridman
(00:58:16)
George Hotz. Okay. The Tinybox, and you said NVMEs in RAID. I forget what you said about memory, all that kind of stuff. Okay, so what about with GPUs?
George Hotz
(00:58:29)
Again, probably-
Lex Fridman
(00:58:30)
Agnostic.
George Hotz
(00:58:30)
Probably 7900 XTXes, but maybe 3090s, maybe A770s. Those are Intel’s.
Lex Fridman
(00:58:36)
You’re flexible, or still exploring?
George Hotz
(00:58:39)
I’m still exploring. I want to deliver a really good experience to people. What GPUs I end up going with, again, I’m leaning toward AMD. We’ll see. In my email, what I said to AMD is, “Just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one year old, stale issues. Open source means developing in public. If you guys can commit to that, I see a real future for AMD as a competitor to Nvidia.”
Lex Fridman
(00:59:13)
Well, I’d love to get a Tinybox to MIT. Whenever it’s ready-
George Hotz
(00:59:17)
Will do.
Lex Fridman
(00:59:17)
Let’s do it.
George Hotz
(00:59:18)
We’re taking pre-orders. I took this from Elon. I’m like, “$100, fully refundable pre-orders.”
Lex Fridman
(00:59:23)
Is it going to be like the cyber truck? It’s going to take a few years?
George Hotz
(00:59:26)
No, I’ll try to do it faster. It’s a lot simpler. It’s a lot simpler than a truck.
Lex Fridman
(00:59:30)
Well, there’s complexities, not to just the putting the thing together, but shipping it, all this kind of stuff.
George Hotz
(00:59:36)
The thing that I want to deliver to people out of the box is being able to run the 65 billion parameter LLaMA in FP16 in real time, at a good 10 tokens per second, or five tokens per second or something.
Lex Fridman
(00:59:46)
Just, it works.
George Hotz
(00:59:47)
Yep, just works.
Lex Fridman
(00:59:48)
LLaMA’s running, or something like LLaMA.
George Hotz
(00:59:53)
Yeah, or I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.
Lex Fridman
(01:00:00)
Yeah, from a wall plug.
George Hotz
(01:00:01)
From a wall plug, yeah. Actually, for inference, it’s not like even more power would help you get more.
Lex Fridman
(01:00:09)
Even more power wouldn’t get you more.
George Hotz
(01:00:11)
Well, no, the biggest model released is the 65 billion parameter LLaMA, as far as I know.
Lex Fridman
(01:00:16)
It sounds like Tinybox will naturally pivot towards company number three. You could just get the girlfriend or boyfriend.
George Hotz
(01:00:26)
That one’s harder, actually.
Lex Fridman
(01:00:27)
The boyfriend is harder?
George Hotz
(01:00:28)
The boyfriend’s harder, yeah.
Lex Fridman
(01:00:29)
I think that’s a very biased statement.
George Hotz
(01:00:32)
No.
Lex Fridman
(01:00:32)
I think a lot of people disagree. Why is it harder to replace a boyfriend than a girlfriend with the artificial LLM?
George Hotz
(01:00:41)
Women are attracted to status and power, and men are attracted to youth and beauty. No, this is what I mean.
Lex Fridman
(01:00:49)
Both could be mimicked easily through the language model.
George Hotz
(01:00:52)
No. No, machines do not have any status or real power.
Lex Fridman
(01:00:56)
I don’t know. Well, first of all, you’re using language mostly to communicate youth and beauty and power and status.
George Hotz
(01:01:07)
Sure, but status fundamentally is a zero-sum game, whereas youth and beauty are not.
Lex Fridman
(01:01:12)
No, I think status is a narrative you can construct. I don’t think status is real.
George Hotz
(01:01:18)
I don’t know. I just think that that’s why it’s harder. Yeah, maybe it is my biases.
Lex Fridman
(01:01:23)
I think status is way easier to fake.
George Hotz
(01:01:25)
I also think that men are probably more desperate and more likely to buy my product. Maybe they’re a better target market.
Lex Fridman
(01:01:31)
Desperation is interesting. Easier to fool.
George Hotz
(01:01:34)
Yeah.
Lex Fridman
(01:01:36)
I could see that.
George Hotz
(01:01:36)
Yeah. Look, I know you can look at porn viewership numbers, right? A lot more men watch porn than women.
Lex Fridman
(01:01:41)
Yeah.
George Hotz
(01:01:41)
You can ask why that is.
Lex Fridman
(01:01:43)
Wow. There’s a lot of questions and answers you can get there. Anyway, with the Tinybox, how many GPUs in Tinybox?
George Hotz
(01:01:53)
Six.
Lex Fridman
(01:01:58)
Oh, man.
George Hotz
(01:01:59)
I’ll tell you why it’s six.
Lex Fridman
(01:02:00)
Yeah.
George Hotz
(01:02:01)
AMD Epyc processors have 128 lanes of PCIe. I want to leave enough lanes for some drives, and I want to leave enough lanes for some networking.
Lex Fridman
(01:02:15)
How do you do cooling for something like this?
George Hotz
(01:02:17)
Ah, that’s one of the big challenges. Not only do I want the cooling to be good, I want it to be quiet.
Lex Fridman
(01:02:22)
Yeah.
George Hotz
(01:02:23)
I want the Tinybox to be able to sit comfortably in your room. Right.
Lex Fridman
(01:02:26)
This is really going towards the girlfriend thing. You want to run the LLM-
George Hotz
(01:02:31)
I’ll give a more, I can talk about how it relates to company number one.
Lex Fridman
(01:02:36)
Comma AI.
George Hotz
(01:02:36)
Yeah.
Lex Fridman
(01:02:37)
Well, but yes, quiet. Oh, quiet because you maybe potentially want to run it in a car?
George Hotz
(01:02:43)
No, no. Quiet because you want to put this thing in your house. You want it to coexist with you. If it’s screaming at 60 dB, you don’t want that in your house. You’ll kick it out.
Lex Fridman
(01:02:51)
60 dB, yeah.
George Hotz
(01:02:51)
Yeah. I want like 40, 45.
Lex Fridman
(01:02:53)
How do you make the cooling quiet? That’s an interesting problem in itself.
George Hotz
(01:02:57)
A key trick is to actually make it big. Ironically, it’s called the Tinybox, but if I can make it big, a lot of that noise is generated because of high pressure air. If you look at a 1U server, a 1U server has these super high pressure fans.

(01:03:09)
They’re super deep and they’re like jet engines, versus if you have something that’s big, well, I can use a big, they call them big ass fans. Those ones that are huge on the ceiling? They’re completely silent.
Lex Fridman
(01:03:21)
Tinybox will be big.
George Hotz
(01:03:26)
I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that’s my constraint there.
Lex Fridman
(01:03:32)
Interesting. Well, the fan stuff, can it be assembled on location, or no?
George Hotz
(01:03:37)
No.
Lex Fridman
(01:03:37)
No, it has to be… Well, you’re…
George Hotz
(01:03:41)
Look, I want to give you a great out of the box experience. I want you to lift this thing out, I want it to be like the Mac, Tinybox.
Lex Fridman
(01:03:48)
The Apple experience.
George Hotz
(01:03:49)
Yeah.
Lex Fridman
(01:03:50)
I love it. Okay. Tinybox would run Tinygrad. What do you envision this whole thing to look like? We’re talking about Linux with a full…
Lex Fridman
(01:04:03)
Linux with a full software engineering environment and it’s just not PyTorch, but tinygrad.
George Hotz
(01:04:10)
Yeah, we did a poll, whether people want Ubuntu or Arch. We’re going to stick with Ubuntu.
Lex Fridman
(01:04:14)
Interesting. What’s your favorite flavor of Linux?
George Hotz
(01:04:17)
Ubuntu.
Lex Fridman
(01:04:18)
Ubuntu. I like Ubuntu MATE, however you pronounce that, MATE. You’ve gotten LLaMA into tinygrad, you’ve gotten Stable Diffusion into tinygrad. What was that like? What are these models? What’s interesting about porting them? What are the challenges? What’s natural? What’s easy? All that kind of stuff.
George Hotz
(01:04:41)
There’s a really simple way to get these models into tinygrad: you can just export them as ONNX, and then tinygrad can run ONNX. So the ports that I did of LLaMA, Stable Diffusion, and now Whisper are more academic, to teach me about the models, but they are cleaner than the PyTorch versions. You can read the code. I think the code is easier to read, it’s less lines. There’s just a few things about the way tinygrad writes things. Here’s a complaint I have about PyTorch. nn.ReLU is a class, so when you create an nn module, you’ll put your nn.ReLUs in the init, and this makes no sense. ReLU is completely stateless. Why should that be a class?
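The complaint can be illustrated with a plain-Python sketch (not PyTorch’s or tinygrad’s actual code): ReLU carries no state, so a bare function does everything the class-based version does, without the ceremony of constructing an object in the init.

```python
# Functional style: ReLU is just a stateless function.
def relu(xs):
    return [max(x, 0.0) for x in xs]

# Class-based style (what nn.ReLU forces): an object you have to
# construct even though it holds no state at all.
class ReLU:
    def __call__(self, xs):
        return relu(xs)

xs = [-1.0, 2.0, -3.0]
assert ReLU()(xs) == relu(xs) == [0.0, 2.0, 0.0]
```

PyTorch does also ship a functional form (`torch.nn.functional.relu`); the critique is that the class form exists and is idiomatic despite the op having no parameters.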
Lex Fridman
(01:05:23)
But that’s more a software engineering thing, or do you think it has a cost on performance?
George Hotz
(01:05:28)
Oh no, it doesn’t have a cost on performance, but yeah, no. That’s what I mean about tinygrad’s front end being cleaner.
Lex Fridman
(01:05:35)
I see. What do you think about Mojo? I don’t know if you’ve been paying attention, the programming language that does some interesting ideas that intersect tinygrad.
George Hotz
(01:05:46)
I think that there’s a spectrum, and on one side you have Mojo, and on the other side you have ggml. Ggml is this like, we’re going to run LLaMA fast on Mac. Okay, we’re going to expand out a little bit, but we’re going to basically go depth first, right? Mojo is like, we’re going to go breadth first. We’re going to go so wide that we’re going to make all of Python fast, and tinygrad’s in the middle. With tinygrad, we are going to make neural networks fast.
Lex Fridman
(01:06:12)
But they try to really get it to be fast, compile down to the specific hardware, and make that compilation step as flexible and resilient as possible.
George Hotz
(01:06:26)
But they have Turing completeness.
Lex Fridman
(01:06:28)
And that limits you? That’s why you’re saying it’s somewhere in the middle. So you’re actually going to be targeting some accelerators, some number, not one.
George Hotz
(01:06:38)
My goal is step one, build an equally performant stack to PyTorch on Nvidia and AMD, but with way less lines. And then step two is, okay, how do we make an accelerator? But you need step one. You have to first build the framework before you can build the accelerator.
Lex Fridman
(01:06:56)
Can you explain MLPerf? What’s your approach in general to benchmarking tinygrad performance?
George Hotz
(01:07:03)
I’m much more of a build it the right way and worry about performance later person. There’s a bunch of things where I haven’t even really dove into performance. The only place where tinygrad is competitive performance-wise right now is on Qualcomm GPUs. So tinygrad’s actually used in openpilot to run the model. So the driving model is tinygrad.
Lex Fridman
(01:07:25)
When did that happen? That transition?
George Hotz
(01:07:28)
About eight months ago now. And it’s 2x faster than Qualcomm’s library.
Lex Fridman
(01:07:33)
What’s the hardware that openpilot runs on, the comma.ai device?
George Hotz
(01:07:38)
It’s a Snapdragon 845.
Lex Fridman
(01:07:40)
Okay.
George Hotz
(01:07:40)
So this is using the GPU. The GPU is an Adreno GPU. There are different things. There’s a really good Microsoft paper that talks about mobile GPUs and why they’re different from desktop GPUs. One of the big things is, on a desktop GPU, you can use buffers. On a mobile GPU, image textures are a lot faster.
Lex Fridman
(01:08:01)
On a mobile GPU image textures. Okay. And so you want to be able to leverage that?
George Hotz
(01:08:08)
I want to be able to leverage it in a way that's completely generic. So there's a lot of… Xiaomi has a pretty good open source library for mobile GPUs called MACE, where they have these kernels, but they're all hand-coded. So that's great if you're doing 3x3 convs, that's great if you're doing dense matmuls, but the minute you go off the beaten path a tiny bit, well, your performance is nothing.
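The hand-coded-kernel pattern George is criticizing can be sketched in a few lines of Python. The function names and the dispatch scheme here are illustrative, not MACE's actual API:

```python
import numpy as np

def conv2d_fast_3x3(x, w):
    # hypothetical hand-tuned path: valid only for 3x3 kernels,
    # unrolled into 9 shifted multiply-adds instead of an inner loop
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for di in range(3):
        for dj in range(3):
            out += w[di, dj] * x[di:di + H - 2, dj:dj + W - 2]
    return out

def conv2d_generic(x, w):
    # slow generic fallback: works for any kernel shape
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d(x, w):
    # dispatch: hand-tuned path for the common case, generic path otherwise
    if w.shape == (3, 3):
        return conv2d_fast_3x3(x, w)
    return conv2d_generic(x, w)
```

Anything off the hand-tuned path, say a 5x5 kernel, falls to the slow generic loop, which is the performance cliff George is pointing at.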

Self-driving

Lex Fridman
(01:08:30)
Since you mentioned openpilot, I'd love to get an update on the company, comma.ai. How are things going there in the development of semi-autonomous driving?
George Hotz
(01:08:46)
Almost no one talks about FSD anymore, and even fewer people talk about openpilot. We solved the problem; we solved it years ago.
Lex Fridman
(01:08:55)
What’s the problem exactly? What does solving it mean?
George Hotz
(01:09:00)
Solving means, how do you build a model that outputs a human policy for driving? How do you build a model that, given a reasonable set of sensors, outputs a human policy for driving? So you have companies like [inaudible 01:09:15], which are hand-coding these things that are quasi human policies. Then you have Tesla, and maybe even to more of an extent comma, asking, okay, how do we just learn the human policy from data? The big thing that we're doing now, and we just put it out on Twitter: at the beginning of comma, we published a paper called Learning a Driving Simulator. And the way this thing worked was, it was an autoencoder and then an RNN in the middle. You take an autoencoder, you compress the picture, you use an RNN, you predict the next state. It was a laughably bad simulator. This is 2015-era machine learning technology. Today we have VQ-VAE and transformers. We're building drive GPT, basically.
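The 2015-era architecture George describes, an encoder compressing each frame to a latent and an RNN predicting the next latent, can be sketched as a toy. Every shape and weight here is made up for illustration, untrained random matrices rather than a real learned simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "autoencoder": random linear maps standing in for trained networks
D_FRAME, D_LATENT = 64, 8
enc = rng.normal(size=(D_LATENT, D_FRAME)) * 0.1
dec = rng.normal(size=(D_FRAME, D_LATENT)) * 0.1

def encode(frame):
    # compress a frame down to a small latent vector
    return enc @ frame

def decode(latent):
    # reconstruct a frame from a latent vector
    return dec @ latent

# toy "RNN": next latent is a function of current latent and hidden state
W_h = rng.normal(size=(D_LATENT, D_LATENT)) * 0.1
W_z = rng.normal(size=(D_LATENT, D_LATENT)) * 0.1

def rnn_step(z, h):
    h_next = np.tanh(W_h @ h + W_z @ z)
    return h_next, h_next  # new hidden state doubles as the predicted latent

# roll the "simulator" forward: encode one frame, predict latents, decode
frame = rng.normal(size=D_FRAME)
z, h = encode(frame), np.zeros(D_LATENT)
for _ in range(5):
    h, z = rnn_step(z, h)
predicted_frame = decode(z)
```

The modern version swaps the autoencoder for a VQ-VAE and the RNN for a transformer, but the encode-predict-decode loop is the same idea.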
Lex Fridman
(01:10:06)
Drive GPT. Okay. And it’s trained on what? Is it trained in a self supervised way?
George Hotz
(01:10:14)
Yeah. It’s trained on all the driving data to predict the next frame.
Lex Fridman
(01:10:17)
So really trying to learn a human policy. What would a human do?
George Hotz
(01:10:22)
Actually, our simulator's conditioned on the pose. So it's actually a simulator: you can put in a state-action pair and get out the next state. And then once you have a simulator, you can do RL in the simulator, and RL will get us that human policy.
Lex Fridman
(01:10:36)
So it transfers?
George Hotz
(01:10:38)
Yeah. RL with a reward function. Not asking, is this close to the human policy, but asking, would a human disengage if you did this behavior?
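The disengagement-based reward George describes could be sketched like this. `would_human_disengage`, the state layout, and the world model are all hypothetical placeholders, not comma's code:

```python
import random

def would_human_disengage(state, action):
    # hypothetical stand-in for a learned disengagement predictor
    return abs(action - state["lane_center"]) > 0.5

def reward(state, action):
    # not "how close is this to the human action" but
    # "would a human take over if the car did this?"
    return -1.0 if would_human_disengage(state, action) else 0.0

def step_simulator(state, action):
    # hypothetical learned world model: state-action pair in, next state out
    return {"lane_center": state["lane_center"] + random.uniform(-0.05, 0.05)}

# one rollout: total reward is (minus) the number of predicted disengagements
random.seed(0)
state, total = {"lane_center": 0.0}, 0.0
policy = lambda s: s["lane_center"] + random.uniform(-0.8, 0.8)
for _ in range(100):
    action = policy(state)
    total += reward(state, action)
    state = step_simulator(state, action)
```

An RL algorithm would then adjust the policy to maximize this total, i.e. to minimize predicted takeovers.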
Lex Fridman
(01:10:47)
Okay, let me think about the distinction there. Would a human disengage. That correlates, I guess, with human policy, but it could be different. So it doesn't just say, what would a human do? It says, what would a good human driver do, such that the experience is comfortable but also not annoying in that the thing is very cautious. So it's finding a nice balance. That's interesting. That's a nice-
George Hotz
(01:11:17)
It’s asking exactly the right question. What will make our customers happy? A system that you never want to disengage.
Lex Fridman
(01:11:25)
Because usually disengagement is this almost always a sign of I’m not happy with what the system is doing.
George Hotz
(01:11:32)
Usually. There’s some that are just, I felt like driving and those are always fine too, but they’re just going to look like noise in the data.
Lex Fridman
(01:11:39)
But even I felt like driving.
George Hotz
(01:11:41)
Maybe. Yeah.
Lex Fridman
(01:11:43)
That's a signal. Why do you feel like driving? You need to recalibrate your relationship with the car. Okay, so that's really interesting. How close are we to solving self-driving?
George Hotz
(01:11:59)
It’s hard to say. We haven’t completely closed the loop yet. So we don’t have anything built that truly looks like that architecture yet. We have prototypes and there’s bugs. So we are a couple bug fixes away. Might take a year, might take 10.
Lex Fridman
(01:12:15)
What’s the nature of the bugs? Are these major philosophical bugs? Logical bugs? What kind of bugs are we talking about?
George Hotz
(01:12:22)
They’re just stupid bugs. And also we might just need more scale. We just massively expanded our compute cluster at comma. We now have about two people worth of compute. 40 petaflops.
Lex Fridman
(01:12:36)
Well, people are different.
George Hotz
(01:12:39)
20 petaflops. That’s a person. It’s just a unit. Horses are different too, but we still call it a horsepower.
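The arithmetic behind the unit, taking George's rough 20-petaflops-per-person figure at face value:

```python
PFLOPS = 1e15  # one petaflop: 10^15 floating-point operations per second

PERSON = 20 * PFLOPS      # George's rough unit: one "person" of compute
cluster = 40 * PFLOPS     # comma's stated cluster capacity
people_equivalent = cluster / PERSON  # 40 / 20 = 2 "people"
```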
Lex Fridman
(01:12:45)
But there’s something different about mobility than there is about perception and action in a very complicated world. But yes.
George Hotz
(01:12:54)
Yeah. Of course not all flops are created equal. If you have randomly initialized weights, it’s not going to…
Lex Fridman
(01:12:58)
Not all flops are created equal.
George Hotz
(01:13:01)
Some flops are doing way more useful things than others.
Lex Fridman
(01:13:03)
Yep. Tell me about it. Okay, so more data. Does scale mean more scale in compute, or scale in data?
George Hotz
(01:13:11)
Both.
Lex Fridman
(01:13:14)
Diversity of data.
George Hotz
(01:13:15)
Diversity is very important in data. Yeah. I think we have 5,000 daily actives.
Lex Fridman
(01:13:25)
How would you evaluate how FSD is doing with self-driving?
George Hotz
(01:13:30)
Pretty well.
Lex Fridman
(01:13:31)
How’s that race going between Comma.ai and FSD?
George Hotz
(01:13:34)
Tesla has always been one to two years ahead of us. They've always been one to two years ahead of us, and they probably always will be, because they're not doing anything wrong.
Lex Fridman
(01:13:41)
What have you seen since the last time we talked that are interesting architectural decisions, training decisions the way they deploy stuff, the architectures they’re using in terms of the software, how the teams are run, all that kind of stuff, data collection, anything interesting?
George Hotz
(01:13:54)
I know they’re moving toward more of an end-to-end approach.
Lex Fridman
(01:13:58)
So creeping towards end-to-end as much as possible across the whole thing? The training, the data collection, and everything?
George Hotz
(01:14:05)
They also have a very fancy simulator. They're probably saying all the same things we are. They're probably saying, we just need to optimize. What is the reward? Well, you get negative reward for disengagement. Everyone knows this. It's just a question of who can actually build and deploy the system.
Lex Fridman
(01:14:18)
Yeah. This requires good software engineering, I think. And the right kind of hardware.
George Hotz
(01:14:25)
Yeah. And the hardware to run it.
Lex Fridman
(01:14:27)
You still don’t believe in cloud in that regard?
George Hotz
(01:14:30)
I have a compute cluster in my office, 800 amps,
Lex Fridman
(01:14:36)
tinygrad.
George Hotz
(01:14:36)
It's 40 kilowatts at idle, our data center. That may seem crazy: 40 kilowatts burning just when the computers are idle. Sorry, compute cluster.
Lex Fridman
(01:14:48)
Compute cluster. I got it.
George Hotz
(01:14:49)
It's not a data center. Data centers are clouds. We don't have clouds. Data centers have air conditioners. We have fans. That makes it a compute cluster.
Lex Fridman
(01:14:59)
I’m guessing this is a kind of legal distinction that should [inaudible 01:15:03].
George Hotz
(01:15:02)
Sure. Yeah. We have a compute cluster.
Lex Fridman
(01:15:05)
You said that you don’t think LLMs have consciousness, or at least not more than a chicken. Do you think they can reason? Is there something interesting to you about the word reason about some of the capabilities that we think is kind of human to be able to integrate complicated information and through a chain of thought arrive at a conclusion that feels novel? A novel integration of disparate facts?
George Hotz
(01:15:36)
Yeah. I don’t think that they can reason better than a lot of people.
Lex Fridman
(01:15:42)
Yeah. Isn’t that amazing to you though? Isn’t that an incredible thing that a transformer can achieve?
George Hotz
(01:15:48)
I think that calculators can add better than a lot of people.
Lex Fridman
(01:15:52)
But language… reasoning through the process of language looks a lot like thought.
George Hotz
(01:16:00)
Making a brilliancy in chess, which feels a lot like thought. Whatever new thing AI can do, everybody thinks is brilliant. And then 20 years go by and they're like, "Well, yeah, but chess, that's mechanical. Adding, that's mechanical."
Lex Fridman
(01:16:13)
So you think language is not that special. It’s like chess.
George Hotz
(01:16:15)
It’s like chess.
Lex Fridman
(01:16:17)
Because it's very human. Listen, there is something different between chess and language. Chess is a game that a subset of the population plays. Language is something we use nonstop for all of our human interaction, and human interaction is fundamental to society. So, holy shit, this language thing is not so difficult to create in the machine.
George Hotz
(01:16:46)
The problem is if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won’t. What’s changed? Today, we think that these machines that have language are intelligent, but I think in 20 years we’re going to be like, yeah, but can it reproduce?
Lex Fridman
(01:17:08)
So reproduction. Yeah, we may redefine what it means to be… what is it? A high-performance living organism on Earth.
George Hotz
(01:17:17)
Humans are always going to define a niche for themselves. Well, we're better than the machines because we can… They tried creativity for a bit, but no one believes that one anymore.
Lex Fridman
(01:17:27)
But niche, is that delusional or is there some accuracy to that? Because maybe with chess you start to realize that we have ill-conceived notions of what makes humans special, the apex organism on earth.
George Hotz
(01:17:46)
Yeah. And I think maybe we’re going to go through that same thing with language and that same thing with creativity.
Lex Fridman
(01:17:53)
But language carries these notions of truth and so on. And so we might be, wait, maybe truth is not carried by language. Maybe there’s a deeper thing.
George Hotz
(01:18:03)
The niche is getting smaller.
Lex Fridman
(01:18:05)
Oh boy.
George Hotz
(01:18:07)
But no, no, no. You don’t understand. Humans are created by God and machines are created by humans. That’ll be the last niche we have.
Lex Fridman
(01:18:16)
So what do you think about just the rapid development of LLMs? If we could just stick on that. It's still incredibly impressive, like with ChatGPT. Just even ChatGPT, what are your thoughts about reinforcement learning with human feedback on these large language models?
George Hotz
(01:18:30)
I'd like to go back to when calculators first came out, or computers. I wasn't around, I'm 33 years old, to see how that affected society.
Lex Fridman
(01:18:47)
Maybe you’re right. So I want to put on the big picture hat here.
George Hotz
(01:18:53)
Oh my God. The refrigerator. Wow.
Lex Fridman
(01:18:56)
Refrigerator, electricity, all that kind of stuff. But no, with the internet, large language models seeming human-like, basically passing a Turing test, it seems it might have really, at scale, rapid transformative effects on society. But you're saying other technologies have as well. So maybe the calculator's not the best example of that, because that just seems like… maybe the calculator-
George Hotz
(01:19:24)
But the poor milkman, the day he learned about refrigerators, he's like, I'm done. You're telling me you can just keep the milk in your house? You don't even need me to deliver it every day. I'm done.
Lex Fridman
(01:19:34)
Well, yeah, you have to actually look at the practical impacts certain technologies have had. Yeah, probably electricity is a big one, and also how rapidly it spread. The internet is a big one.
George Hotz
(01:19:46)
I do think it’s different this time though.
Lex Fridman
(01:19:48)
Yeah, it just feels like-
George Hotz
(01:19:49)
The niche is getting smaller.
Lex Fridman
(01:19:51)
The niche is humans.
George Hotz
(01:19:52)
Yes.
Lex Fridman
(01:19:53)
That makes humans special.
George Hotz
(01:19:55)
Yes.
Lex Fridman
(01:19:57)
It feels like it's getting smaller rapidly though, doesn't it? Or is that just a feeling? We dramatize everything.
George Hotz
(01:20:02)
I think we dramatize everything. I think you could have asked the milkman when he saw refrigerators: and they're going to have one of these in every home?
Lex Fridman
(01:20:12)
Yeah. But boy, is it impressive. So much more impressive than seeing a chess world champion AI system.
George Hotz
(01:20:23)
I disagree, actually. I disagree. I think things like MuZero and AlphaGo are so much more impressive because these things are playing beyond the highest human level. The language models are writing middle school level essays and people are like, wow, it’s a great essay. It’s a great five paragraph essay about the causes of the civil war.
Lex Fridman
(01:20:47)
Okay, forget the Civil War. Just generating code, Codex. So you're saying it's mediocre code?
George Hotz
(01:20:53)
Terrible.
Lex Fridman
(01:20:54)
But I don’t think it’s terrible. I think it’s just mediocre code. Often close to correct for mediocre purposes.
George Hotz
(01:21:03)
That's the scariest code. I spend 5% of my time typing and 95% of my time debugging. The last thing I want is close-to-correct code. I want a machine that can help me with the debugging, not with the typing.
Lex Fridman
(01:21:14)
Well, it's like level two driving, a similar kind of thing. Yeah. You still should be a good programmer in order to modify it. I wouldn't even say debugging. It's just modifying the code, reading it.
George Hotz
(01:21:26)
Actually, I don't think it's like level two driving. I think driving is not tool complete and programming is. Meaning, you don't use the best possible tools to drive. Cars have had basically the same interface for the last 50 years. Computers have a radically different interface.
Lex Fridman
(01:21:43)
Okay. Can you describe the concept of tool complete?
George Hotz
(01:21:47)
Yeah. So think about the difference between a car from 1980 and a car from today. No difference, really. It's got a bunch of pedals. It's got a steering wheel. Great. Maybe now it has a few ADAS features, but it's pretty much the same car. You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript, and you put them at an Apple IIe prompt, and you tell them about the line numbers in BASIC. But how do I insert something between line 17 and 18? Oh wow.
Lex Fridman
(01:22:19)
So in "tool" you're including the programming languages. So it's the entire stack of the tooling.
George Hotz
(01:22:24)
Exactly.
Lex Fridman
(01:22:25)
So it’s not just the IDEs or something like this. It’s everything.
George Hotz
(01:22:28)
Yes. It's the IDEs, the language, the runtime, it's everything. And programming is tool complete. So almost, if Codex or Copilot are helping you, that actually probably means that your framework or library is bad and there's too much boilerplate in it.
Lex Fridman
(01:22:47)
Yeah, but don’t you think so much programming has boilerplate?
George Hotz
(01:22:50)
Tinygrad is now 2,700 lines, and it can run LLaMA and Stable Diffusion, and all of this stuff is in 2,700 lines. Boilerplate and abstraction and indirection, all these things are just bad code.

Programming

Lex Fridman
(01:23:08)
Well, let's talk about good code and bad code. I would say, for generic scripts that I write just offhand, 80% of it is written by GPT. Just quick offhand stuff, so not libraries, not performant code, not stuff for robotics and so on. Just quick stuff, because so much of programming is doing some boilerplate, but doing so efficiently and quickly, you can't really automate fully with a generic method, a generic kind of IDE-type recommendation or something like this. You do need some of the complexity of language models.
George Hotz
(01:23:53)
Yeah, I guess if I was really writing… maybe today, if I wrote a lot of data-parsing stuff. I don't play CTFs anymore, but if I still played CTFs, a lot of it is just, you have to write a parser for this data format. Or Advent of Code. I wonder when the models are going to start to help with that code, and they may. And the models may help you with speed, and the models are very fast. But my programming speed is not at all limited by my typing speed. In very few cases it is. If I'm writing some script to just parse some weird data format, sure, my programming speed is limited by my typing speed.
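This is the kind of rote parser George means, the sort of thing an LLM genuinely can type faster than you. The "name: x,y,z" line format here is invented for the example:

```python
def parse_records(text):
    """Parse a hypothetical 'name: x,y,z' per-line format into dicts."""
    records = []
    for line in text.strip().splitlines():
        name, rest = line.split(":", 1)
        values = [int(v) for v in rest.split(",")]
        records.append({"name": name.strip(), "values": values})
    return records

data = """
alice: 1,2,3
bob: 4,5,6
"""
parsed = parse_records(data)
```

Typing-bound boilerplate like this is exactly where model-generated code helps; it is the debugging-bound code where it doesn't.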
Lex Fridman
(01:24:35)
What about looking stuff up? Because that’s essentially a more efficient lookup.
George Hotz
(01:24:41)
When I was at Twitter, I tried to use ChatGPT to ask some questions. What's the API for this? And it would just hallucinate. It would give me completely made-up API functions that sounded real.
Lex Fridman
(01:24:54)
Well. Do you think that’s just a temporary stage?
George Hotz
(01:24:57)
No.
Lex Fridman
(01:24:58)
You don't think it'll get better and better at this kind of stuff? Because it only hallucinates in the edge cases.
George Hotz
(01:25:04)
Yes.
Lex Fridman
(01:25:04)
If you write generic code, it's actually pretty good.
George Hotz
(01:25:06)
Yes. If you are writing an absolutely basic React app with a button, it's not going to hallucinate. No, there are ways to fix the hallucination problem. I think Facebook has an interesting paper, it's called Atlas. And it's actually weird the way that we do language models right now, where all of the information is in the weights. The human brain doesn't really work like this; there's a hippocampus and a memory system. So why don't LLMs have a memory system? And there are people working on them. I think future LLMs are going to be smaller, but are going to run looping on themselves, and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources explicitly.
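A minimal sketch of the retrieval idea: look facts up in an external store and return a citable source id instead of trusting the weights. The bag-of-words cosine match and the document store here are stand-ins for a real embedding index like the one a system such as Atlas would use:

```python
from collections import Counter
import math

# hypothetical external document store; in practice, a web or corpus index
DOCS = {
    "doc_a": "the tinygrad framework runs neural networks on qualcomm gpus",
    "doc_b": "refrigerators changed how households stored milk and food",
}

def vectorize(text):
    # crude bag-of-words "embedding"
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    # return the best-matching context plus a citable source id
    q = vectorize(query)
    best = max(DOCS, key=lambda d: cosine(q, vectorize(DOCS[d])))
    return DOCS[best], best

context, source = retrieve("what gpus does tinygrad run on")
```

Because the answer context comes from a concrete document, the system can cite `source` instead of inventing an answer from its weights.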
Lex Fridman
(01:25:47)
Which is really helpful to integrate the human into the loop of the thing because you can go check the sources and you can investigate. So whenever the thing is hallucinating, you can have the human supervision. So that’s pushing it towards level two driving.
George Hotz
(01:26:01)
That’s going to kill Google.
Lex Fridman
(01:26:03)
Wait, which part?
George Hotz
(01:26:04)
When someone makes an LLM that’s capable of citing its sources, it will kill Google.
Lex Fridman
(01:26:08)
LLM that’s citing its sources because that’s basically a search engine.
George Hotz
(01:26:13)
That’s what people want in the search engine.
Lex Fridman
(01:26:14)
But also Google might be the people that build it.
George Hotz
(01:26:16)
Maybe.
Lex Fridman
(01:26:17)
And put ads on it.
George Hotz
(01:26:19)
I’d count them out.
Lex Fridman
(01:26:20)
Why is that? Why do you think? Who wins this race? Who are the competitors?
George Hotz
(01:26:26)
All right.
Lex Fridman
(01:26:27)
We got Tiny Corp. You’re a legitimate competitor in that.
George Hotz
(01:26:33)
I’m not trying to compete on that.
Lex Fridman
(01:26:35)
You’re not.
George Hotz
(01:26:36)
No. Not as [inaudible 01:26:37].
Lex Fridman
(01:26:36)
Can accidentally stumble into that competition.

(01:26:40)
You don’t think you might build a search engine or replace Google search.
George Hotz
(01:26:43)
When I started Comma, I said over and over again, I’m going to win self-driving cars. I still believe that. I have never said I’m going to win search with the Tiny Corp and I’m never going to say that because I won’t.
Lex Fridman
(01:26:55)
The night is still young. You don't know. How hard is it to win search in this new world? One of the things ChatGPT shows is that there could be a few interesting tricks that create a really compelling product.
George Hotz
(01:27:09)
Some startup's going to figure it out. I think, if you ask me, Google's still the number one webpage. I think by the end of the decade Google won't be the number one webpage anymore.
Lex Fridman
(01:27:17)
So you don’t think Google because of how big the corporation is?
George Hotz
(01:27:21)
Look, I would put a lot more money on Mark Zuckerberg.
Lex Fridman
(01:27:25)
Why is that?
George Hotz
(01:27:27)
Because Mark Zuckerberg’s alive. This is old Paul Graham essay. Startups are either alive or dead. Google’s dead. Facebook is alive.
Lex Fridman
(01:27:38)
Facebook is alive. Meta is alive.
George Hotz
(01:27:39)
Actually, Meta.
Lex Fridman
(01:27:40)
Meta.
George Hotz
(01:27:40)
You see what I mean? That's just Mark Zuckerberg. This is Mark Zuckerberg reading that Paul Graham essay and being like, I'm going to show everyone how alive we are. I'm going to change the name.
Lex Fridman
(01:27:49)
So you think there's this gutsy pivoting engine that Google doesn't have, that the engine in a startup has: constantly being alive, I guess.
George Hotz
(01:28:03)
When I listen to Sam Altman podcast, he talked about the button. Everyone who talks about AI talks about the button, the button to turn it off, right? Do we have a button to turn off Google? Is anybody in the world capable of shutting Google down?
Lex Fridman
(01:28:17)
What does that mean exactly? The company or the search engine.
George Hotz
(01:28:19)
Could we shut the search engine down? Could we shut the company down, either?
Lex Fridman
(01:28:24)
Can you elaborate on the value of that question?
George Hotz
(01:28:26)
Does Sundar Pichai have the authority to turn off google.com tomorrow?
Lex Fridman
(01:28:31)
Who has the authority? That’s a good question.
George Hotz
(01:28:33)
Just anyone.
Lex Fridman
(01:28:36)
Just anyone. Yeah, I’m sure.
George Hotz
(01:28:37)
Are you sure? No, they have the technical power, but do they have the authority? Let's say Sundar Pichai made this his sole mission. He came into Google tomorrow and said, "I'm going to shut google.com down." I don't think he'd keep his position too long.
Lex Fridman
(01:28:52)
And what is the mechanism by which he wouldn’t keep his position?
George Hotz
(01:28:55)
Well, the board and the shareholders and corporate undermining, and our revenue is zero now.
Lex Fridman
(01:29:02)
Okay. What’s the case you’re making here? So the capitalist machine prevents you from having the button.
George Hotz
(01:29:09)
Yeah. And it always will. This is true for the AI too. There's no turning the AIs off. There's no button. You can't press it. Now, does Mark Zuckerberg have that button for facebook.com?
Lex Fridman
(01:29:21)
Yes. Probably more.
George Hotz
(01:29:22)
I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.
Lex Fridman
(01:29:29)
I guess you could say Elon has similar stuff.
George Hotz
(01:29:31)
Oh, Elon has the button.
Lex Fridman
(01:29:32)
Yeah.
George Hotz
(01:29:35)
Can Elon fire the missiles? Can he fire the missiles?
Lex Fridman
(01:29:39)
I think some questions are better left unasked.
George Hotz
(01:29:42)
Right? A rocket and an ICBM… if you have a rocket that can land anywhere, isn't that an ICBM? Well, yeah. Don't ask too many questions.
Lex Fridman
(01:29:51)
My God. But the positive side of the button is that you can innovate aggressively, is what you're saying? Which is what's required for turning an LLM into a search engine.
George Hotz
(01:30:04)
I would bet on a startup.
Lex Fridman
(01:30:05)
Because it’s so easy, right?
George Hotz
(01:30:06)
I'd bet on something that looks like Midjourney, but for search.
Lex Fridman
(01:30:11)
Just is able to cite sources, loop on itself. It just feels like one model can take off with a nice wrapper and some scale… It's hard to create a product that just works really nicely, stably.
George Hotz
(01:30:23)
The other thing that's going to be cool is there is some aspect of a winner-take-all effect. Once someone starts deploying a product that gets a lot of usage, and you see this with OpenAI, they're going to get the data set to train future versions of the model. I was asked about Google image search when I worked there, almost 15 years ago now. How does Google know which image is an apple? And I said, the metadata. And they're like, yeah, that works about half the time. How does Google know? You'll see they're all apples on the front page when you search apple. And I don't know, I didn't come up with the answer. The guy's like, "Well, it's the ones people click on when they search apple." Oh my God, yeah.
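The click-feedback loop from the image-search story, sketched with hypothetical names; real ranking systems fold clicks in with many other signals:

```python
from collections import defaultdict

# clicks[query][image_id] = how many searchers clicked that image
clicks = defaultdict(lambda: defaultdict(int))

def record_click(query, image_id):
    clicks[query][image_id] += 1

def rank(query, candidates):
    # order candidates by how often searchers clicked them for this query
    return sorted(candidates, key=lambda img: clicks[query][img], reverse=True)

# usage feeds the ranker: the deployed product collects its own training data
for _ in range(12):
    record_click("apple", "img_fruit")
record_click("apple", "img_logo")

ranked = rank("apple", ["img_logo", "img_fruit"])
```

This is the winner-take-all mechanism: whoever has the users gets the signal, and the signal improves the product.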

AI safety

Lex Fridman
(01:31:00)
Yeah. That data is really, really powerful. It's the human supervision. What do you think about the fact that LLaMA was open sourced? I just did a conversation with Mark Zuckerberg, and he's all in on open source.
George Hotz
(01:31:17)
Who would've thought that Mark Zuckerberg would be the good guy? No, I mean it.
Lex Fridman
(01:31:23)
Who would've thought anything in this world? It's hard to know. But open source to you ultimately is a good thing here.
George Hotz
(01:31:33)
Undoubtedly. What's ironic about all these AI safety people is they're going to build the exact thing they fear. We need to have one model that we control and align. That's the only way you end up paperclipped. There's no way you end up paperclipped if everybody has an AI.
Lex Fridman
(01:31:54)
So open sourcing is the way to fight the paperclip maximizer.
George Hotz
(01:31:56)
Absolutely. It’s the only way. You think you’re going to control it. You’re not going to control it.
Lex Fridman
(01:32:02)
So the criticism you have for the AI safety folks is that there is belief and a desire for control. And that belief and desire for centralized control of dangerous AI systems is not good.
George Hotz
(01:32:16)
Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-way mixture model with eight sets of weights.
Lex Fridman
(01:32:25)
Who did you have to murder to get that information? All right. But, yes.
George Hotz
(01:32:30)
Look. Everyone at Open AI knows what I just said was true. Right?
Lex Fridman
(01:32:33)
Yeah.
George Hotz
(01:32:34)
Now, ask the question. It upsets me when… like GPT-2. When OpenAI came out with GPT-2 and raised a whole fake AI safety thing about it. Now the model is laughable. They used AI safety to hype up their company, and it's disgusting.
Lex Fridman
(01:32:52)
Or the flip side of that is they used a relatively weak model in retrospect to explore how do we do AI safety correctly? How do we release things? How do we go through the process?
George Hotz
(01:33:06)
Sure. That’s a charitable interpretation.
Lex Fridman
(01:33:10)
I don’t know how much hype there is in AI safety, honestly.
George Hotz
(01:33:12)
There’s so much hype, at least on Twitter. I don’t know. Maybe Twitter’s not real life.
Lex Fridman
(01:33:15)
Twitter's not real life. Come on. In terms of hype. I think OpenAI has been finding an interesting balance between transparency and putting a value on AI safety. You don't think they should just go all-out open source? Do a LLaMA?
George Hotz
(01:33:33)
Absolutely. Yeah.
Lex Fridman
(01:33:36)
This is a tough question, which is: open source both the base, the foundation model, and the fine-tuned one? So the model that can be ultra racist and dangerous and tell you how to build a nuclear weapon.
George Hotz
(01:33:51)
Oh my God. Have you met humans? Right. Half of these AI align-
Lex Fridman
(01:33:55)
I haven’t met most humans. This allows you to meet every human.
George Hotz
(01:34:00)
I know. But half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language they use. It's not the machines you want to align, it's me.
Lex Fridman
(01:34:11)
But here's the thing, it makes it very accessible to ask questions where the answers have dangerous consequences if you were to act on them.
George Hotz
(01:34:25)
Yeah, welcome to the world.
Lex Fridman
(01:34:28)
Well, no, for me, there’s a lot of friction. If I want to find out how to blow up something.
George Hotz
(01:34:36)
No, there’s not a lot of friction. That’s so easy.
Lex Fridman
(01:34:39)
No. What do I search? Do I use Bing? Which search engine do I use?
George Hotz
(01:34:45)
No. There’s lots of stuff. [inaudible 01:34:47].
Lex Fridman
(01:34:46)
No, it feels like I have to keep [inaudible 01:34:47].
George Hotz
(01:34:47)
First off, anyone who’s stupid enough to search for, how to blow up a building in my neighborhood is not smart enough to build a bomb. Right?
Lex Fridman
(01:34:54)
Are you sure about that?
George Hotz
(01:34:55)
Yes.
Lex Fridman
(01:34:58)
I feel like a language model makes it more accessible for that person who’s not smart enough to do-
George Hotz
(01:35:05)
They’re not going to build a bomb. Trust me. The people who are incapable of figuring out how to ask that question a bit more academically and get a real answer from it are not capable of procuring the materials which are somewhat controlled to build a bomb.
Lex Fridman
(01:35:19)
No, I think an LLM makes it more accessible to people with money, without the technical know-how. Right? Do you really need to know how to build a bomb to build a bomb? You can hire people, you can find-
George Hotz
(01:35:30)
Oh, you can hire people to build a… You know what, I was asking this question on my stream. Can Jeff Bezos hire a hit man? Probably not.
Lex Fridman
(01:35:37)
But a language model can probably help you out.
George Hotz
(01:35:41)
Yeah. And you'll still go to jail. It's not like the language model is God. You literally just hired someone on Fiverr.
Lex Fridman
(01:35:49)
But okay. GPT-4, in terms of finding a hitman, is like asking Fiverr how to find a hitman. I understand. But don't you think-
George Hotz
(01:35:56)
Asking Wikihow.
Lex Fridman
(01:35:58)
Wikihow. But don't you think GPT-5 will be better? Because don't you think that information is out there on the internet?
George Hotz
(01:36:03)
Yeah.
Lex Fridman
(01:36:03)
… because don’t you think that information is out there on the Internet?
George Hotz
(01:36:03)
I mean, yeah. And I think that if someone is actually serious enough to hire a hitman or build a bomb, they’d also be serious enough to find the information.
Lex Fridman
(01:36:10)
I don't think so. I think it makes it more accessible. If you have enough money to hire a hitman, I think it just decreases the friction, how hard it is to find that kind of hitman. I honestly think there's a jump in ease and scale of how much harm you can do. And I don't mean harm with language, I mean harm with actual violence.
George Hotz
(01:36:32)
What you’re basically saying is like, “Okay, what’s going to happen is these people who are not intelligent are going to use machines to augment their intelligence, and now intelligent people and machines…” Intelligence is scary. Intelligent agents are scary. When I’m in the woods, the scariest animal to me is a human. Now, look, there’s nice California humans. I see you’re wearing street clothes and Nikes, all right, fine. But you look like you’ve been a human who’s been in the woods for a while, I’m more scared of you than a bear.
Lex Fridman
(01:37:01)
That’s what they say about the Amazon, when you go to the Amazon, it’s the human tribes.
George Hotz
(01:37:05)
Oh, yeah. So, intelligence is scary. So, to ask this question in a generic way, you're like, "What if we took everybody who maybe has ill intention but is not so intelligent, and gave them intelligence?" Right? So, we should have intelligence control, of course. We should only give intelligence to good people. And that is an absolutely horrifying idea.
Lex Fridman
(01:37:28)
So to you, the best defense is to give more intelligence to the good guys and intelligence… give intelligence to everybody.
George Hotz
(01:37:35)
Give intelligence to everybody. You know what, and it’s not even like guns. People say this about guns. People say this all about guns, “What’s the best defense against the bad guy with a gun? A good guy with a gun.” I kind of subscribe to that. But I really subscribe to that with intelligence.
Lex Fridman
(01:37:45)
In a fundamental way I agree with you, but there just feels like so much uncertainty, and so much can happen rapidly that you can lose a lot of control, and you can do a lot of damage.
George Hotz
(01:37:54)
Oh no, we can lose control? Yes, thank God.
Lex Fridman
(01:37:58)
Yeah.
George Hotz
(01:37:59)
I hope they lose control. I want them to lose control more than anything else.
Lex Fridman
(01:38:05)
I think when you lose control you can do a lot of damage, but you could do more damage when you centralize and hold onto control, is the point you’re…
George Hotz
(01:38:12)
Centralized and held control is tyranny. I don’t like anarchy either, but I’ll always take anarchy over tyranny. Anarchy you have a chance.
Lex Fridman
(01:38:21)
This human civilization we got going on is quite interesting. I mean, I agree with you. So to you, open source is the way forward here. So you admire what Facebook is doing here, what Meta is doing with the release of the-
George Hotz
(01:38:34)
Yeah, a lot.
Lex Fridman
(01:38:34)
Yeah, I don’t know.
George Hotz
(01:38:36)
I lost $80,000 last year investing in Meta, and when they released Llama I’m like, “Yeah, whatever, man. That was worth it.”
Lex Fridman
(01:38:41)
It was worth it. Do you think Google and OpenAI with Microsoft will match what Meta is doing, or no?
George Hotz
(01:38:50)
If you’re a researcher, why would you want to work at OpenAI? You’re on the bad team. I mean it. You’re on the bad team, who can’t even say that GPT-4 has 220 billion parameters.
Lex Fridman
(01:39:01)
So closed source to you is the bad team?
George Hotz
(01:39:03)
Not only closed source. I’m not saying you need to make your model weights open. I’m not saying that. I totally understand, “We’re keeping our model weights closed, because that’s our product.” That’s fine. I’m saying, “Because of AI safety reasons we can’t tell you the number of billions of parameters in the model,” that’s just the bad guys.
Lex Fridman
(01:39:23)
Just because you’re mocking AI safety doesn’t mean it’s not real.
George Hotz
(01:39:26)
Oh, of course.
Lex Fridman
(01:39:27)
Is it possible that these things can really do a lot of damage that we don’t know…
George Hotz
(01:39:31)
Oh my God, yes. Intelligence is so dangerous, be it human intelligence or machine intelligence. Intelligence is dangerous.
Lex Fridman
(01:39:38)
But machine intelligence is so much easier to deploy at scale, rapidly. Okay, if you have human-like bots on Twitter, and you have 1000 of them create a whole narrative, you can manipulate millions of people.
George Hotz
(01:39:55)
You mean like the intelligence agencies in America are doing right now?
Lex Fridman
(01:39:59)
Yeah, but they’re not doing it that well. It feels like you can do a lot-
George Hotz
(01:40:03)
They’re doing it pretty well. I think they’re doing a pretty good job.
Lex Fridman
(01:40:07)
I suspect they’re not nearly as good as a bunch of GPT fueled bots could be.
George Hotz
(01:40:12)
Well, I mean, of course they’re looking into the latest technologies for control of people. Of course.
Lex Fridman
(01:40:16)
But I think there’s a George Hotz type character that can do a better job than the entirety of them.
George Hotz
(01:40:21)
No way.
Lex Fridman
(01:40:21)
You don’t think so?
George Hotz
(01:40:22)
No way. No. And I’ll tell you why the George Hotz character can’t. And I thought about this a lot with hacking. I can find exploits in web browsers. I probably still can. I mean, I was better at it when I was 24.
Lex Fridman
(01:40:29)
Yeah.
George Hotz
(01:40:29)
But the thing that I lack is the ability to slowly and steadily deploy them over five years. And this is what intelligence agencies are very good at. Intelligence agencies don’t have the most sophisticated technology, they just have-
Lex Fridman
(01:40:43)
Endurance?
George Hotz
(01:40:44)
Endurance.
Lex Fridman
(01:40:46)
And yeah, the financial backing, and the infrastructure for the endurance.
George Hotz
(01:40:51)
So the more we can decentralize power…
Lex Fridman
(01:40:54)
Yeah.
George Hotz
(01:40:55)
You can make an argument, by the way, that nobody should have these things. And I would defend that argument. You’re saying that, “Look, LLMs, and AI, and machine intelligence can cause a lot of harm, so nobody should have it.” And I will respect someone philosophically with that position, just like I will respect someone philosophically with the position that nobody should have guns. But I will not respect philosophically with, “Only the trusted authorities should have access to this.”
Lex Fridman
(01:41:21)
Yeah.
George Hotz
(01:41:22)
Who are the trusted authorities? You know what, I’m not worried about alignment between AI company and their machines. I’m worried about alignment between me and AI company.
Lex Fridman
(01:41:33)
What do you think Eliezer Yudkowsky would say to you? Because he’s really against open source.
George Hotz
(01:41:39)
I know. And I thought about this. I’ve thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists.
Lex Fridman
(01:41:55)
Interesting.
George Hotz
(01:41:58)
I think that Eliezer Yudkowsky is scared of these things. And I am scared of these things too. Everyone should be scared of these things, these things are scary. But now you ask about the two possible futures, one where a small trusted centralized group of people has them, and the other where everyone has them, and I am much less scared of the second future than the first.
Lex Fridman
(01:42:23)
Well, there’s a small trusted group of people that have control of our nuclear weapons.
George Hotz
(01:42:28)
There’s a difference. Again, a nuclear weapon cannot be deployed tactically, and a nuclear weapon is not a defense against a nuclear weapon, except maybe in some philosophical mind game kind of way.
Lex Fridman
(01:42:41)
But AI’s different how exactly?
George Hotz
(01:42:44)
Okay. Let’s say the intelligence agency deploys a million bots on Twitter, or 1000 bots on Twitter, to try and convince me of a point. Imagine I had a powerful AI running on my computer saying, “Okay, nice psyop, nice psyop, nice psyop. Here’s a psyop, I filtered it out for you.”
Lex Fridman
(01:43:04)
Yeah. I mean, so you have fundamentally hope for that, for the defense of psyop.
George Hotz
(01:43:11)
I don’t even mean these things in truly horrible ways. I mean these things in straight up, like ad blocker. [inaudible 01:43:16] ad blocker, I don’t want ads.
Lex Fridman
(01:43:18)
Yeah.
George Hotz
(01:43:18)
But they’re always finding… Imagine I had an AI that could just block all the ads for me.
Lex Fridman
(01:43:24)
So you believe in the power of the people to always create an ad blocker? Yeah, I kind of share that belief. That’s one of the deepest optimisms I have, is just there’s a lot of good guys. So you shouldn’t handpick them, just throw powerful technology out there, and the good guys will outnumber and outpower the bad guys.
George Hotz
(01:43:49)
Yeah. I’m not even going to say there’s a lot of good guys. I’m saying that good outnumbers bad. Good outnumbers bad.
Lex Fridman
(01:43:54)
In scale and performance?
George Hotz
(01:43:56)
Yeah, definitely in scale and performance. Probably just in number too. Probably just in general. If you believe philosophically in democracy, you obviously believe that, that good outnumbers bad. If you give it to a small number of people, there’s a chance you gave it to good people, but there’s also a chance you gave it to bad people. If you give it to everybody, well, if good outnumbers bad, then you definitely gave it to more good people than bad.
Lex Fridman
(01:44:25)
That’s really interesting. So that’s on the safety grounds, but then also of course there’s other motivations, like you don’t want to give away your secret sauce.
George Hotz
(01:44:32)
Well I mean, look, I respect capitalism. I think that it would be polite for you to make model architectures open source, and fundamental breakthroughs open source. I don’t think you have to make weights open source.
Lex Fridman
(01:44:43)
You know it’s interesting, is that there’s so many possible trajectories in human history where you could have the next Google be open source. So for example, I don’t know if the connection is accurate, but Wikipedia made a lot of interesting decisions, not to put ads. Wikipedia is basically open source, you can think of it that way.
George Hotz
(01:45:04)
Yeah.
Lex Fridman
(01:45:05)
And that’s one of the main websites on the Internet.
George Hotz
(01:45:08)
Yeah.
Lex Fridman
(01:45:09)
And it didn’t have to be that way. Google could’ve created Wikipedia and put ads on it. You could probably run amazing ads now on Wikipedia. You wouldn’t have to keep asking for money. But it’s interesting, right? So open-source Llama, derivatives of open-source Llama, might win the Internet.
George Hotz
(01:45:28)
I sure hope so. I hope to see another era… You know, the kids today don’t know how good the Internet used to be. And I don’t think this is just, “All right, come on, everyone’s nostalgic for their past.” But I actually think the Internet, before small groups of weaponized corporate and government interests took it over, was a beautiful place.
Lex Fridman
(01:45:50)
You know, those small number of companies have created some sexy products. But you’re saying overall, in the long arc of history, the centralization of power they represent has suffocated the human spirit at scale.
George Hotz
(01:46:04)
Here’s a question to ask about those beautiful sexy products. Imagine 2000 Google to 2010 Google. A lot changed. We got Maps, we got Gmail.
Lex Fridman
(01:46:14)
We lost a lot of products too, I think.
George Hotz
(01:46:16)
Yeah, I mean somewhere probably… We got Chrome, right?
Lex Fridman
(01:46:18)
Yeah, Chrome. That’s right.
George Hotz
(01:46:19)
And now let’s go from 2010… We got Android. Now let’s go from 2010 to 2020. What does Google have? Well, a search engine, Maps, Mail, Android and Chrome. Oh, I see.
Lex Fridman
(01:46:30)
Yeah.
George Hotz
(01:46:31)
The Internet was this… You know, I was Time’s Person of the Year in 2006? Yeah.
Lex Fridman
(01:46:38)
I love this.
George Hotz
(01:46:39)
Yeah, “You” was Time’s Person of the Year in 2006. So quickly did people forget. And I think some of it’s social media, I think some of it… Look, I hope that… It’s possible that some very sinister things happened. I don’t know, I think it might just be the effects of social media. But something happened in the last 20 years.
Lex Fridman
(01:47:05)
Oh, okay, so you’re just being an old man who is worried about the… I think it’s the cycle thing, there’s ups and downs, and I think people rediscover the power of decentralization.
George Hotz
(01:47:15)
Yeah.
Lex Fridman
(01:47:15)
I mean, that’s kind of what the whole cryptocurrency thing is trying. I think crypto is just carrying the flame of that spirit, of stuff should be decentralized.
George Hotz
(01:47:25)
It’s just such a shame that they all got rich. You know?
Lex Fridman
(01:47:28)
Yeah.
George Hotz
(01:47:28)
If you took all the money out of crypto, it would’ve been a beautiful place.
Lex Fridman
(01:47:32)
Yeah.
George Hotz
(01:47:32)
But no, I mean, these people, they sucked all the value out of it and took it.
Lex Fridman
(01:47:38)
Yeah. Money kind of corrupts the mind somehow. It becomes this drug, and you forget what-
George Hotz
(01:47:42)
Money corrupted all of crypto. You had coins worth billions of dollars that had zero use.
Lex Fridman
(01:47:49)
You still have hope for crypto?
George Hotz
(01:47:51)
Sure. I have hope for the ideas. I really do. Yeah. I want the US dollar to collapse. I do.
Lex Fridman
(01:48:03)
George Hotz. Well, let me… sort of on the AI safety. Do you think there’s some interesting questions there though, to solve for the open source community in this case? So alignment for example, or the control problem. If you really have super powerful… you said it’s scary.
George Hotz
(01:48:21)
Oh, yeah.
Lex Fridman
(01:48:21)
What do we do with it? So not control, not centralized control, but if you were then… You’re going to see some guy or gal release a super powerful language model, open source, and here you are, George Hotz, thinking, “Holy shit, okay, what ideas do I have to combat this thing?” So, what ideas would you have?
George Hotz
(01:48:44)
I am so much not worried about the machine independently doing harm. That’s what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator.
Lex Fridman
(01:48:57)
So you don’t think it will find autonomy?
George Hotz
(01:48:59)
No. This is sci-fi B movie garbage.
Lex Fridman
(01:49:03)
Okay. What if the thing writes code, it basically writes viruses?
George Hotz
(01:49:08)
If the thing writes viruses, it’s because the human told it to write viruses.
Lex Fridman
(01:49:14)
Yeah, but there’s some things you can’t put back in the box. That’s kind of the whole point, is it kind of spreads. Give it access to the Internet, it spreads, it installs itself, modifies your shit-
George Hotz
(01:49:24)
B, B, B + five. Not real.
Lex Fridman
(01:49:27)
Listen, I’m trying to get better at my plot writing.
George Hotz
(01:49:30)
The thing that worries me, I mean, we have a real danger to discuss, and that is bad humans using the thing to do whatever bad unaligned AI thing you want.
Lex Fridman
(01:49:39)
But this goes to your previous concern that, who gets to define who’s a good human and who is a bad human?
George Hotz
(01:49:45)
Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that’s who gets power. It’s always the bad humans who get power.
Lex Fridman
(01:49:55)
Oh, okay. And power turns even slightly good humans to bad.
George Hotz
(01:50:01)
Sure.
Lex Fridman
(01:50:02)
That’s the intuition you have. I don’t know.
George Hotz
(01:50:06)
I don’t think everyone. I don’t think everyone. I just think… Here’s the saying that I put in one of my blog posts. It’s, when I was in the hacking world, I found 95% of people to be good and 5% of people to be bad. Just who I personally judged as good people and bad people. They believed in good things for the world. They wanted flourishing, and they wanted growth, and they wanted things I consider good. I came into the business world with Comma, and I found the exact opposite. I found 5% of people good and 95% of people bad. I found a world that promotes psychopathy.
Lex Fridman
(01:50:38)
I wonder what that means. I wonder if that’s anecdotal, or if there’s truth to it, something at the core of capitalism, of the people who run capitalism, that promotes psychopathy.
George Hotz
(01:50:55)
That saying may of course be my own biases. That may be my own biases, that these people are a lot more aligned with me than these other people.
Lex Fridman
(01:51:03)
Yeah.
George Hotz
(01:51:04)
So, I can certainly recognize that. But in general, this is the common sense maxim, which is the people who end up getting power are never the ones you want with it.
Lex Fridman
(01:51:15)
But do you have a concern of super intelligent AGI, open sourced, and then what do you do with that? I’m not saying control it, it’s open source. What do we do with it as a human species?
George Hotz
(01:51:27)
That’s not up to me. I’m not a central planner.
Lex Fridman
(01:51:31)
No, not a central planner, but you’ll probably Tweet, “There’s a few days left to live for the human species.”
George Hotz
(01:51:35)
I have my ideas of what to do with it, and everyone else has their ideas of what to do with it, and may the best ideas win.
Lex Fridman
(01:51:40)
But at this point, based on… Because it’s not regulation. It can be decentralized regulation, where people agree that this is just… We create tools that make it more difficult for you to… Maybe make it more difficult for code to spread, antivirus software, this kind of thing, but this-
George Hotz
(01:52:01)
Oh, you’re saying that you should build AI firewalls? That sounds good. You should definitely be running an AI firewall.
Lex Fridman
(01:52:05)
Yeah, right. Exactly.
George Hotz
(01:52:05)
You should be running an AI firewall to your mind.
Lex Fridman
(01:52:08)
Right.
George Hotz
(01:52:09)
You’re constantly under-
Lex Fridman
(01:52:10)
That’s such an interesting idea…
George Hotz
(01:52:11)
Infowars, man.
Lex Fridman
(01:52:13)
Well, I don’t know if you’re being sarcastic or not, but-
George Hotz
(01:52:14)
No, I’m dead serious.
Lex Fridman
(01:52:15)
… but I think there’s power to that. It’s like, “How do I protect my mind from influence of human-like or superhuman intelligent bots?”
George Hotz
(01:52:26)
I am not being… I would pay so much money for that product. I would pay so much money for that product. You know how much money I’d pay just for a spam filter that works?
Lex Fridman
(01:52:35)
Well, on Twitter sometimes I would like to have a protection mechanism for my mind from the outrage mobs.
George Hotz
(01:52:46)
Yeah.
Lex Fridman
(01:52:46)
Because they feel like bot-like behavior.
George Hotz
(01:52:48)
Oh, yeah.
Lex Fridman
(01:52:48)
There’s a large number of people that will just grab a viral narrative and attack anyone else that believes otherwise.
George Hotz
(01:52:55)
Whenever someone’s telling me some story from the news, I’m always like, “I don’t want to hear it. CIA op, bro. It’s a CIA op, bro.” It doesn’t matter if that’s true or not, it’s just trying to influence your mind. You’re repeating an ad to me. The viral mobs, yeah, they’re…
Lex Fridman
(01:53:09)
To me, a defense against those mobs is just getting multiple perspectives always from sources that make you feel kind of like you’re getting smarter. And actually, it just basically feels good. A good documentary, just something feels good about it. It’s well done, it’s like, “Oh, okay, I never thought of it this way.” It just feels good. Sometimes the outrage mobs, even if they have a good point behind it, when they’re mocking, and derisive, and just aggressive, “You’re with us or against us,” this fucking-
George Hotz
(01:53:42)
This is why I delete my Tweets.
Lex Fridman
(01:53:44)
Yeah, why’d you do that? I miss your Tweets.
George Hotz
(01:53:48)
You know what it is? The algorithm promotes toxicity.
Lex Fridman
(01:53:52)
Yeah.
George Hotz
(01:53:54)
And I think Elon has a much better chance of fixing it than the previous regime.
Lex Fridman
(01:54:01)
Yeah.
George Hotz
(01:54:02)
But to solve this problem, to build a social network that is actually not toxic, without moderation.
Lex Fridman
(01:54:13)
Not the stick, but carrots, where people look for goodness. Catalyze the process of connecting cool people being cool to each other.
George Hotz
(01:54:24)
Yeah.
Lex Fridman
(01:54:25)
Without ever censoring.
George Hotz
(01:54:26)
Without ever censoring. Scott Alexander has a blog post I like, where he talks about how moderation is not censorship. All the moderation you want to put on Twitter, you could totally make that moderation just a… You don’t have to block it for everybody. You can just have a filter button that people can turn off. It’s like SafeSearch for Twitter. Someone could just turn that off. But then you could take this idea to an extreme. Well, the network should just show you… This is a Couchsurfing CEO thing. If it shows you… Right now, these algorithms are designed to maximize engagement. Well, it turns out outrage maximizes engagement. Quirk of the human mind. Just, “If I fall for it, everyone falls for it.” So yeah, you’ve got to figure out how to maximize for something other than engagement.
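The opt-out moderation idea described here can be sketched as a per-user filter layer: the platform labels posts, and each user chooses which labels to hide. This is a hypothetical toy model, not any real platform’s API; the labels and post fields are invented for illustration.

```python
# Hypothetical sketch: moderation as per-user, transparent filtering.
# Posts carry labels; each user mutes the labels they don't want to see,
# SafeSearch-style, instead of content being removed for everybody.

def visible_posts(posts, muted_labels):
    """Return posts whose labels do not intersect the user's muted set."""
    return [p for p in posts if not (p["labels"] & muted_labels)]

posts = [
    {"id": 1, "text": "cat photo", "labels": set()},
    {"id": 2, "text": "political rant", "labels": {"politics"}},
    {"id": 3, "text": "spicy meme", "labels": {"politics", "nsfw"}},
]

# Same content, different views: one user mutes nothing, another mutes politics.
assert [p["id"] for p in visible_posts(posts, set())] == [1, 2, 3]
assert [p["id"] for p in visible_posts(posts, {"politics"})] == [1]
```

Nothing is deleted platform-wide; the filter is a client-side choice each user can turn off.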
Lex Fridman
(01:55:12)
And I actually believe that you can make money with that too. I don’t think engagement is the only way to make money.
George Hotz
(01:55:18)
I actually think it’s incredible that we’re starting to see… I think, again, Elon’s doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they’re no longer the product, they’re the customer. And then they can start building something that’s good for the customer, and not good for the other customer, which is the ad agencies.
Lex Fridman
(01:55:34)
It hasn’t picked up steam.
George Hotz
(01:55:38)
I pay for Twitter, doesn’t even get me anything. It’s my donation to this new business model hopefully working out.
Lex Fridman
(01:55:43)
Sure. But for this business model to work, most people should be signed up to Twitter. And so, there was something perhaps not compelling or something like this to people.
George Hotz
(01:55:54)
No, I don’t think you need most people at all. I think that, why do I need most people? Don’t make an 8000 person company, make a 50 person company.
Lex Fridman
(01:56:02)
Ah.
George Hotz
(01:56:02)
Right.

Working at Twitter

Lex Fridman
(01:56:03)
Well, so speaking of which, you worked at Twitter for a bit.
George Hotz
(01:56:08)
I did.
Lex Fridman
(01:56:09)
As an intern.
George Hotz
(01:56:10)
Mm-hmm.
Lex Fridman
(01:56:11)
The world’s greatest intern.
George Hotz
(01:56:14)
There’s been better.
Lex Fridman
(01:56:15)
There’s been better. Tell me about your time at Twitter. How did it come about, and what did you learn from the experience?
George Hotz
(01:56:22)
So, I deleted my first Twitter in 2010. I had over 100,000 followers back when that actually meant something. I just saw… My coworker summarized it well. He’s like, “Whenever I see someone’s Twitter page, I either think the same of them or less of them. I never think more of them.”
Lex Fridman
(01:56:46)
Yeah.
George Hotz
(01:56:49)
I don’t know, I don’t want to mention any names, but some people who maybe you would read their books, and you would respect them, you see them on Twitter and you’re like, “Okay, dude…”
Lex Fridman
(01:56:58)
Yeah. But there’s some people with the same. You know who I respect a lot, are people that just post really good technical stuff.
George Hotz
(01:57:06)
Yeah.
Lex Fridman
(01:57:08)
And I guess, I don’t know, I think I respect them more for it. Because you realize, “Oh, this wasn’t… There’s so much depth to this person, to their technical understanding of so many different topics.”
George Hotz
(01:57:21)
Okay.
Lex Fridman
(01:57:22)
So I try to follow people, I try to consume stuff that’s technical machine learning content.
George Hotz
(01:57:27)
There’s probably a few of those people. And the problem is inherently what the algorithm rewards. And people think about these algorithms, people think that they are terrible, awful things. And I love that Elon open sourced it. Because what it does is actually pretty obvious. It just predicts what you are likely to retweet and like, and linger on. That’s what all these algorithms do. It’s what TikTok does, it’s what all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that’s a quirk of the human condition.
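The loop George describes, score each candidate post by predicted engagement and rank, can be sketched minimally. The features and weights below are invented for illustration; real systems learn them from logged behavior.

```python
# Minimal sketch of an engagement-ranked feed. Weights are made up for
# illustration; a real recommender learns them from retweet/like/linger logs.

def engagement_score(post):
    # If outrage reliably drives interaction, a learned model ends up
    # weighting outrage-like features heavily -- the quirk discussed above.
    w = {"outrage": 3.0, "novelty": 1.0, "author_affinity": 2.0}
    return sum(w[k] * post["features"].get(k, 0.0) for k in w)

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "calm_essay",   "features": {"novelty": 0.9}},
    {"id": "outrage_bait", "features": {"outrage": 0.8, "novelty": 0.2}},
])
assert feed[0]["id"] == "outrage_bait"  # outrage wins the ranking
```

The point of the sketch: nothing in the objective is about truth or quality, only predicted interaction.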
Lex Fridman
(01:58:04)
I mean, and there’s different flavors of outrage. It could be mockery, you could be outraged… The topic of outrage could be different. It could be an idea, it could be a person, it could be… And maybe there’s a better word than outrage. It could be drama, and this kind of stuff.
George Hotz
(01:58:19)
Sure, drama. Yeah.
Lex Fridman
(01:58:20)
But it doesn’t feel like when you consume it it’s a constructive thing for the individuals that consume it in the long term.
George Hotz
(01:58:26)
Yeah. So my time there, I absolutely couldn’t believe, I got a crazy amount of hate on Twitter for working at Twitter. It seemed like people associated with this, maybe you are exposed to some of this.
Lex Fridman
(01:58:41)
So connection to Elon, or is it working at Twitter?
George Hotz
(01:58:44)
Twitter and Elon, the whole… There’s just-
Lex Fridman
(01:58:47)
Because Elon’s gotten a bit spicy during that time. A bit political, a bit-
George Hotz
(01:58:52)
Yeah. Yeah. I remember one of my Tweets, it was, “Never go full Republican,” and Elon liked it. You know?
Lex Fridman
(01:59:00)
Oh boy. Yeah, I mean, there’s a roller coaster of that. But the being political on Twitter, boy.
George Hotz
(01:59:10)
Yeah. Yeah.
Lex Fridman
(01:59:11)
And also just attacking anybody on Twitter, it comes back at you, harder. Of his political ad attacks.
George Hotz
(01:59:20)
Sure. Sure, absolutely.
Lex Fridman
(01:59:22)
And then letting people back on the platform adds even more fun to the beautiful chaos.
George Hotz
(01:59:34)
I was hoping… And I remember when Elon talked about buying Twitter, six months earlier, he was talking about a principled commitment to free speech. And I’m a big believer and fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn’t quite what happened. Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Instead of all the Twitter Files, shadow… And really, the oligarchy just decides, what? Cloth masks are ineffective against COVID. That’s a true statement. Every doctor in 2019 knew it, and now I’m banned on Twitter for saying it? Interesting. Oligarchy. So now you have a monarchy, and he bans things he doesn’t like. So you know, it’s different power, and maybe I align more with him than with the oligarchy.
Lex Fridman
(02:00:25)
But it’s not free speech absolutism.
George Hotz
(02:00:25)
It’s not free speech, no.
Lex Fridman
(02:00:28)
But I feel like being a free speech absolutist on a social network requires you to also have tools for individuals to control what they consume more easily. Not censor, but just control, like, “Oh, I’d like to see more cats and less politics.”
George Hotz
(02:00:48)
And this isn’t even remotely controversial. This is just saying you want to give paying customers for a product what they want.
Lex Fridman
(02:00:54)
Yeah. And not through the process of censorship, but through the process of-
George Hotz
(02:00:57)
Well, it’s individualized. It’s individualized, transparent censorship, which is honestly what I want. What is an ad blocker? It’s individualized transparent censorship, right?
Lex Fridman
(02:01:05)
Yeah, but censorship is a strong word, that people are very sensitive to.
George Hotz
(02:01:10)
I know. But you know, I just use words to describe what they functionally are. And what is an ad blocker? It’s just censorship. But I love what you’re censoring.
Lex Fridman
(02:01:16)
When I look at you right now, I’m looking at you, I’m censoring everything else out when my mind is focused on you. You can use the word censorship that way. But usually, people get very sensitive about the censorship thing. I think when anyone is allowed to say anything, you should probably have tools that maximize the quality of the experience for individuals. For me, what I really value, “Boy, it would be amazing to somehow figure out how to do that,” I love disagreement, and debate, and people who disagree with each other, disagree with me, especially in the space of ideas, but the high quality ones. So not derision.
George Hotz
(02:01:56)
Maslow’s hierarchy of argument. I think there’s a real word for it.
Lex Fridman
(02:02:00)
Probably.
George Hotz
(02:02:00)
Yeah.
Lex Fridman
(02:02:00)
There’s just the way of talking that’s snarky, and so somehow gets people on Twitter, and they get excited and so on.
George Hotz
(02:02:08)
You have ad hominem refuting the central point. I’ve seen this as an actual pyramid sometimes.
Lex Fridman
(02:02:12)
Yeah. And all the wrong stuff is attractive to people.
George Hotz
(02:02:16)
I mean, we can just train a classifier to absolutely say what level of Maslow’s hierarchy of argument are you at. And if it’s ad hominem, like, “Okay, cool. I turned on the no ad hominem filter.”
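A toy version of the “no ad hominem filter” George proposes could look like this. The keyword heuristic is a purely illustrative stand-in for the trained classifier he has in mind; the cue list and labels are invented.

```python
# Toy stand-in for a hierarchy-of-argument classifier: a real version would
# be a trained text model; a keyword heuristic flags likely ad hominem here.

AD_HOMINEM_CUES = ("idiot", "moron", "shill", "clown", "you people")

def argument_level(text):
    """Very rough: 'ad_hominem' if it attacks the person, else 'substance'."""
    lowered = text.lower()
    return "ad_hominem" if any(c in lowered for c in AD_HOMINEM_CUES) else "substance"

def no_ad_hominem_filter(replies):
    # The user-facing filter: hide replies classified as ad hominem.
    return [r for r in replies if argument_level(r) != "ad_hominem"]

replies = [
    "Your benchmark omits warm-up runs, so the numbers are skewed.",
    "Only an idiot would believe this.",
]
assert no_ad_hominem_filter(replies) == [replies[0]]
```

As with the SafeSearch analogy earlier, this would be an opt-in client-side filter, not platform-wide removal.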
Lex Fridman
(02:02:27)
I wonder if there’s a social network that will allow you to have that kind of filter?
George Hotz
(02:02:31)
Yeah. So here’s the problem with that. It’s not going to win in a free market.
Lex Fridman
(02:02:38)
Yeah.
George Hotz
(02:02:38)
What wins in a free market is… All television today is reality television, because it’s engaging. Engaging is what wins in a free market. So it becomes hard to keep these other more nuanced values.
Lex Fridman
(02:02:53)
Well, okay, so that’s the experience of being on Twitter. But then you got a chance to also, together with the other engineers and with Elon, sort of look, brainstorm when you step into a code base that’s been around for a long time, there’s other social networks, Facebook, this is old code bases. And you step in and see, “Okay, how do we make, with a fresh mind, progress in this code base?” What did you learn about software engineering, about programming from just experiencing that?
George Hotz
(02:03:22)
So, my technical recommendation to Elon, and I said this on the Twitter Spaces afterward, I said this many times during my brief internship, was that you need refactors before features. This code base was… And look, I’ve worked at Google, I’ve worked at Facebook. Facebook has the best code, then Google, then Twitter. And you know what, you can know this, because look at the machine learning frameworks. Facebook released PyTorch, Google released TensorFlow, and Twitter released… Okay, so you know, it…
Lex Fridman
(02:03:57)
It’s a proxy. But yeah, the Google Corp. is quite interesting. There’s a lot of really good software engineers there, but the code base is very large.
George Hotz
(02:04:04)
The code base was good in 2005. It looks like 2005 era [inaudible 02:04:09].
Lex Fridman
(02:04:08)
But there’s so many products, so many teams, it’s very difficult to… I feel like Twitter does less, obviously, much less than Google in terms of the set of features. So I can imagine the number of software engineers that could re-create Twitter is much smaller than to re-create Google.
George Hotz
(02:04:30)
Yeah. I still believe… and the amount of hate I got for saying this, that 50 people could build and maintain Twitter pretty comfortably.
Lex Fridman
(02:04:44)
What’s the nature of the hate? That you don’t know what you’re talking about?
George Hotz
(02:04:44)
You know what it is? And this is my summary of the hate I get on Hacker News. When I say I’m going to do something, they have to believe that it’s impossible. Because if doing things was possible, they’d have to do some soul-searching and ask the question, why didn’t they do anything? And I do think that’s where the hate comes from.
Lex Fridman
(02:05:06)
Yeah, there’s a core truth to that, yeah. So when you say, “I’m going to solve self driving,” people go like, “What are your credentials? What the hell are you talking about? This is an extremely difficult problem. Of course you’re a noob that doesn’t understand the problem deeply.” I mean, that was the same nature of hate that probably Elon got when he first talked about autonomous driving. But you know, there’s pros and cons to that. Because there is experts in this world.
George Hotz
(02:05:33)
No, but the mockers aren’t experts.
Lex Fridman
(02:05:35)
Yeah.
George Hotz
(02:05:35)
The people who are mocking are not experts with carefully reasoned arguments about why you need 8000 people to run a bird app. They’re like, “But the people are going to lose their jobs!”
Lex Fridman
(02:05:46)
Well that, but also just the software engineers that probably criticize, “No, it’s a lot more complicated than you realize.” But maybe it doesn’t need to be so complicated.
George Hotz
(02:05:53)
You know, some people in the world like to create complexity. Some people in the world thrive under complexity. Like lawyers. Lawyers want the world to be more complex, because you need more lawyers, you need more legal hours. I think that’s another… If there’s two great evils in the world, it’s centralization and complexity.
Lex Fridman
(02:06:09)
Yeah. And one of the hidden side effects of software engineering is finding pleasure in complexity. I mean, I remember just taking all the software engineering courses, and just doing programming, and coming up in this object-oriented programming kind of idea. Not often do people tell you, “Do the simplest possible thing.” A professor, a teacher is not going to get up in front and say, “This is the simplest way to do it.” They’ll say, “There’s the right way,” and the right way, at least for a long time, especially since I came up with Java, is there’s so much boilerplate, so many classes, so many designs and architectures and so on, like planning for features far into the future, and planning poorly, and all this kind of stuff.

(02:07:08)
And then there’s this code base that follows you along and puts pressure on you, and nobody knows what different parts do, which slows everything down. There’s a kind of bureaucracy that’s instilled in the code as a result of that. But then you feel like, “Oh, well, I follow good software engineering practices.” It’s an interesting trade-off, because then you look at the ghettoness of Perl in the old days… how quickly you could just write a couple lines and just get stuff done. That trade-off is interesting. Or Bash, or whatever, these kind of ghetto things you could do on Linux.
George Hotz
(02:07:39)
One of my favorite things to look at today is, how much do you trust your tests? We’ve put a ton of effort at comma, and I’ve put a ton of effort in tinygrad, into making sure that if you change the code and the tests pass, you didn’t break the code.
Lex Fridman
(02:07:52)
Yeah.
George Hotz
(02:07:52)
Now, this obviously is not always true. But the closer that is to true, the more you trust your tests, the more you’re like, “Oh, I got a pull request, and the tests pass, I feel okay to merge that,” the faster you can make progress.
Lex Fridman
(02:08:03)
So you’re always programming with your tests in mind, developing tests with the idea that if they pass, the change should be good.
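A toy sketch of that idea, with a made-up function (not anything from comma or tinygrad): if the tests pin down the behavior you care about, a passing run after a change means the change is mergeable.

```python
# Hypothetical toy example: a function whose tests pin down the
# behavior we care about. If these still pass after a refactor,
# the refactor is safe to merge.
def slugify(title: str) -> str:
    # lowercase the title and join its words with hyphens
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces ") == "extra-spaces"
    assert slugify("one") == "one"

test_slugify()
```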
George Hotz
(02:08:08)
And Twitter had a…
Lex Fridman
(02:08:10)
Not that.
George Hotz
(02:08:10)
It was impossible to make progress in the code base.
Lex Fridman
(02:08:15)
What other stuff can you say about the code base that made it difficult? What are some interesting sort of quirks broadly speaking from that compared to just your experience with comma and everywhere else?
George Hotz
(02:08:29)
I spoke to a bunch of individual contributors at Twitter. And I just [inaudible 02:08:36]. I’m like, “Okay, so what’s wrong with this place? Why does this code look like this?” And they explained to me what Twitter’s promotion system was. The way that you got promoted at Twitter was you wrote a library that a lot of people used, right? So some guy wrote an Nginx replacement for Twitter. Why does Twitter need an Nginx replacement? What was wrong with Nginx? Well, you see, you’re not going to get promoted if you use Nginx. But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you’re going to get promoted.
Lex Fridman
(02:09:08)
So interesting because from an individual perspective, how do you create the kind of incentives that will lead to a great code base? Okay, what’s the answer to that?
George Hotz
(02:09:20)
So what I do at comma and at Tiny Corp is you have to explain it to me. You have to explain to me what this code does. And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way. Obviously, we can have a conversation about this. It’s not dictatorial, but if you’re like, “Wow. Wait, that actually is way simpler.” The simplicity is important.
Lex Fridman
(02:09:47)
But that requires people who oversee the code at the highest levels to be like that, okay?
George Hotz
(02:09:54)
It requires technical leadership you trust.
Lex Fridman
(02:09:55)
Yeah, technical leadership. So managers or whatever should have to have technical savvy, deep technical savvy.
George Hotz
(02:10:03)
Managers should be better programmers than the people who they manage.
Lex Fridman
(02:10:05)
Yeah. And that’s not always trivial to create. Especially in large companies, managers get soft.
George Hotz
(02:10:13)
And this is just, I’ve instilled this culture at comma, and comma has better programmers than me who work there. But again, I’m like the old guy from Good Will Hunting. It’s like, “Look man, I might not be as good as you, but I can see the difference between me and you.” And this is what you need at the top. Or, you don’t necessarily need the manager to be the absolute best, I shouldn’t say that, but they need to be able to recognize skill.
Lex Fridman
(02:10:36)
Yeah. And have good intuition, intuition that’s laden with wisdom from all the battles of trying to reduce complexity in code bases.
George Hotz
(02:10:45)
I took a political approach at comma too, that I think is pretty interesting. I think Elon takes the same political approach. Google had no politics, and what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics, and they’re all mine, and no dissent is tolerated.
Lex Fridman
(02:11:02)
And so it’s a dictatorship.
George Hotz
(02:11:03)
Yep. It’s an absolute dictatorship. Right. Elon does the same thing. Now, the thing about my dictatorship is here are my values.
Lex Fridman
(02:11:11)
Yeah. It’s just transparent.
George Hotz
(02:11:12)
It’s transparent. It’s a transparent dictatorship and you can choose to opt in or you get free exit. That’s the beauty of companies. If you don’t like the dictatorship, you quit.
Lex Fridman
(02:11:22)
So you mentioned rewrite before or refactor before features.
George Hotz
(02:11:27)
Mm-hmm.
Lex Fridman
(02:11:28)
If you were to refactor the Twitter code base, what would that look like? And maybe also comment on how difficult is it to refactor.
George Hotz
(02:11:35)
The main thing I would do is first of all, identify the pieces and then put tests in between the pieces. Twitter is a microservice architecture, all these different microservices. And the thing that I was working on there… Look, like, “George didn’t know any JavaScript. He asked how to fix search,” blah, blah, blah. Look man, the thing is, I’m upset about the way that this whole thing was portrayed, because it wasn’t taken honestly by people. It was taken by people who started out with a bad faith assumption.
Lex Fridman
(02:12:12)
And you as a programmer were just being transparent out there, actually having fun, and this is what programming should be about.
George Hotz
(02:12:18)
But I love that Elon gave me this opportunity. Really, I do. And the day I quit, he came on my Twitter Spaces afterward and we had a conversation. I respect that so much.
Lex Fridman
(02:12:29)
Yeah. And it’s also inspiring to engineers and programmers, and it’s cool. It should be fun. The people that are hating on it, it’s like, oh man.
George Hotz
(02:12:38)
It was fun. It was fun. It was stressful, but I felt like I was at a cool point in history. And I hope I was useful and I probably kind of wasn’t, but maybe [inaudible 02:12:47].
Lex Fridman
(02:12:47)
Well, you also were one of the people that kind of made a strong case to refactor, and that’s a really interesting thing to raise. The timing of that is really interesting. If you look at just the development of Autopilot, going from Mobileye… If you look at the history of semi-autonomous driving in Tesla, it’s more and more, you could say, refactoring, or starting from scratch, redeveloping from scratch.
George Hotz
(02:13:17)
It’s refactoring all the way down.
Lex Fridman
(02:13:19)
And the question is, can you do that sooner? Can you maintain product profitability, and what’s the right time to do it? How do you do it? And on one hand, it’s like you don’t want to pull off the Band-Aid. Everything works. It’s just little fixes here and there, but maybe starting from scratch…
George Hotz
(02:13:41)
This is the main philosophy of tinygrad. You have never refactored enough. Your code can get smaller, your code can get simpler, your ideas can be more elegant.
Lex Fridman
(02:13:49)
But say you are running Twitter development teams, engineering teams, would you go as far as different programming language, just go that far?
George Hotz
(02:14:03)
I mean, the first thing that I would do is build tests. The first thing I would do is get a CI to where people can trust to make changes. Before I touched any code, I would actually say, “No one touches any code. The first thing we do is we test this code base.” This is classic. This is how you approach a legacy code base. This is what any how-to-approach-a-legacy-code-base book will tell you.
Lex Fridman
(02:14:27)
And then you hope that there’s modules that can live on for a while, and then you add new ones, maybe in a different language or design.
George Hotz
(02:14:37)
Before we add new ones, we replace the old ones.
Lex Fridman
(02:14:39)
Yeah. Meaning like, replace old ones with something simpler.
George Hotz
(02:14:42)
We look at this thing that’s a hundred thousand lines and we’re like, “Well, okay, maybe this even made sense in 2010, but now we can replace this with an open source thing.” Right? And we look at this here, here’s another 50,000 lines. Well, actually, we can replace this with 300 lines of Go. And you know what? I trust that the Go actually replaces this thing because all the tests still pass. So step one is testing. And then step two, the programming language is an afterthought, right? You let a whole lot of people compete and be like, “Okay, who wants to rewrite a module, whatever language you want to write it in?” Just the tests have to pass. And if you figure out how to make the tests pass but break the site, we’ve got to go back to step one. Step one is get tests that you trust in order to make changes in the code base.
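The “record the old behavior, then require the replacement to match it” step George describes is often called characterization (or golden-master) testing. A minimal Python sketch, with both modules invented for illustration:

```python
# Invented stand-ins for illustration: legacy_tax plays the role of
# the big old module, new_tax the small replacement.
def legacy_tax(amount: float) -> float:
    return round(amount * 0.0725, 2)

# Step 1: record the legacy behavior on representative inputs.
golden = {a: legacy_tax(a) for a in [0.0, 1.0, 19.99, 100.0, 12345.67]}

def new_tax(amount: float) -> float:
    return round(amount * 725 / 10000, 2)

# Step 2: the replacement merges only if every recorded case matches;
# this is the "all the tests still pass" gate.
assert all(new_tax(a) == expected for a, expected in golden.items())
```

The same gate works regardless of what language the replacement is written in, as long as the recorded cases are checked at the boundary.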
Lex Fridman
(02:15:23)
I wonder how hard it is too, because I’m with you on testing, on everything, from tests to asserts to everything. My code is just covered in this stuff, because it should be very easy to make rapid changes and know that it’s not going to break everything. And that’s the way to do it. But I wonder, how difficult is it to integrate tests into a code base that doesn’t have many of them?
George Hotz
(02:15:50)
So I’ll tell you what my plan was at Twitter. It’s actually similar to something we use at comma. So at comma, we have this thing called process replay, and we have a bunch of routes that’ll be run through it. So comma’s a microservice architecture too. We have microservices in the driving: one for the cameras, one for the sensors, one for the planner, one for the model. And we have an API which the microservices talk to each other with. We use this custom thing called cereal, which uses ZMQ. Twitter uses Thrift, and then it uses this thing called Finagle, which is a Scala RPC backend. But this doesn’t even really matter.

(02:16:25)
The Thrift and Finagle layer was a great place, I thought, to write tests, to start building something that looks like process replay. So Twitter had some stuff that looked kind of like this, but it wasn’t offline. It was only online. So you could ship a modified version of it, and then you could redirect some of the traffic to your modified version and diff those two, but it was all online. There was no CI in the traditional sense. I mean, there was some, but it was not full coverage.
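Generically, process replay amounts to logging request/response pairs at the RPC boundary, then replaying the logged requests offline through a candidate build and diffing the responses. A minimal sketch, with the services and traffic invented for illustration:

```python
# A generic sketch of process replay; the services and traffic here
# are invented for illustration (comma's real version replays logged
# driving routes through its microservices).
def ranker_v1(items):          # the deployed service logic
    return sorted(items)

def ranker_v2(items):          # a candidate rewrite to validate
    return sorted(items, reverse=False)

# 1) In production, record request/response pairs at the RPC
#    boundary (Thrift, cereal, or whatever the transport is).
requests = [[3, 1, 2], ["b", "a"], []]
recorded = [(req, ranker_v1(req)) for req in requests]

# 2) Offline, replay the recorded requests through the candidate
#    and diff its responses against the recorded ones.
diffs = [(req, old, ranker_v2(req))
         for req, old in recorded
         if ranker_v2(req) != old]

assert diffs == []  # an empty diff means the rewrite is safe to ship
```

The key property is that step 2 needs no production traffic, so it can run in CI on every pull request.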
Lex Fridman
(02:16:54)
So you can’t run all of Twitter offline to test something.
George Hotz
(02:16:57)
Well, then this was another problem. You can’t run all of Twitter.
Lex Fridman
(02:17:00)
Period. Any one person can’t.
George Hotz
(02:17:03)
Twitter runs in three data centers and that’s it.
Lex Fridman
(02:17:05)
Yeah.
George Hotz
(02:17:05)
There’s no other place you can run Twitter, which is like, “George, you don’t understand, this is modern software development.” No, this is bullshit. Why can’t it run on my laptop? “What do you mean? Only Twitter can run it.” Yeah. Okay. Well, I’m not saying you’re going to download the whole database to your laptop, but I’m saying all the middleware and the front end should run on my laptop, right?
Lex Fridman
(02:17:24)
That sounds really compelling. But can that be achieved by a code base that grows over the years? I mean, the three data centers didn’t have to be, right? Because there are totally different designs.
George Hotz
(02:17:37)
The problem is more like why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?
Lex Fridman
(02:17:47)
One of the ways to explain it is that the incentive for software developers to move up in the company is to add code, to add especially large-
George Hotz
(02:17:55)
And you know what? The incentive for politicians to move up in the political structure is to add laws, same problem.
Lex Fridman
(02:18:01)
Yeah. Yeah. And the flip side is to simplify, simplify, simplify.
George Hotz
(02:18:08)
You know what? This is something that I do differently from Elon, with comma, about self-driving cars. I hear the new version’s going to come out, and the new version is not going to be better at first, and it’s going to require a ton of refactors. And I say, “Okay, take as long as you need.” If you convince me this architecture’s better, okay, we have to move to it. Even if it’s not going to make the product better tomorrow, the top priority is getting the architecture right.
Lex Fridman
(02:18:34)
So what do you think about a thing where the product is online? So I guess, if you ran engineering on Twitter, would you just do a refactor? How long would it take? What would that mean for the running of the actual service?
George Hotz
(02:18:55)
I’m not the right person to run Twitter. I’m just not. And that’s the problem. I don’t really know. A common thing that I thought a lot while I was there, whenever I thought something that was different from what Elon thought, I’d have to run something in the back of my head reminding myself that Elon is the richest man in the world, and in general, his ideas are better than mine. Now, there’s a few things I think I do understand and know more about, but in general, I’m not qualified to run Twitter. No, I shouldn’t say qualified, but I don’t think I’d be that good at it. I don’t think I’d be good at it. I don’t think I’d really be good at running an engineering organization at scale.

(02:19:35)
I think I could lead a very good refactor of Twitter, and it would take six months to a year. And the result to show at the end of it would be that feature development, in general, takes 10x less time, 10x less man-hours. That’s what I think I could actually do. Do I think that it’s the right decision for the business? Above my pay grade.
Lex Fridman
(02:20:03)
But a lot of these kinds of decisions are above everybody’s pay grade.
George Hotz
(02:20:06)
I don’t want to be a manager. I don’t want to do that. If you really forced me to, yeah, it would maybe make me upset if I had to make those decisions. I don’t want to.
Lex Fridman
(02:20:19)
Yeah. But a refactor is so compelling. If this is to become something much bigger than what Twitter was, it feels like a refactor has to be coming at some point.
George Hotz
(02:20:32)
“George, you’re a junior software engineer. Every junior software engineer wants to come in and refactor all code.” Okay. That’s like your opinion, man.
Lex Fridman
(02:20:42)
Yeah, sometimes they’re right.
George Hotz
(02:20:46)
Well, whether they’re right or not, it’s definitely not for that reason. It’s definitely not a question of engineering prowess. It is a question of maybe what the priorities are for the company. And I did get more intelligent feedback from people, I think in good faith, actually from Elon. And Elon’s sort of people were like, well, a stop-the-world refactor might be great for engineering, but we have a business to run. And hey, above my pay grade.
Lex Fridman
(02:21:13)
What’d you think about Elon as an engineering leader having to experience him in the most chaotic of spaces, I would say?
George Hotz
(02:21:25)
My respect for him is unchanged. And I did have to think a lot more deeply about some of the decisions he’s forced to make.
Lex Fridman
(02:21:33)
About the tensions, the trade-offs within those decisions?
George Hotz
(02:21:39)
About a whole matrix coming at him. I think that’s Andrew Tate’s word for it. Sorry to borrow it.
Lex Fridman
(02:21:46)
Also, bigger than engineering, just everything.
George Hotz
(02:21:49)
Yeah. Like the war on the woke.
Lex Fridman
(02:21:53)
Yeah.
George Hotz
(02:21:54)
It’s just, man, he doesn’t have to do this. He doesn’t have to. He could just go chill at the Four Seasons in Maui. But see, one person I respect and one person I don’t.
Lex Fridman
(02:22:11)
So his heart is in the right place fighting in this case for this ideal of the freedom of expression.
George Hotz
(02:22:17)
Well, I wouldn’t define the ideal so simply. I don’t think you can define the ideal any more than just saying it’s Elon’s idea of a good world, of which freedom of expression is a part.
Lex Fridman
(02:22:28)
But still, the downside of that is the monarchy.
George Hotz
(02:22:33)
Yeah. I mean monarchy has problems, right? But I mean, would I trade right now the current oligarchy which runs America for the monarchy? Yeah, I would. Sure. For the Elon monarchy, yeah. You know why? Because power would cost 1 cent a kilowatt-hour, 10th of a cent a kilowatt-hour.
Lex Fridman
(02:22:53)
What do you mean?
George Hotz
(02:22:54)
Right now, I pay about 20 cents a kilowatt-hour for electricity in San Diego. That’s like the same price you paid in 1980. What the hell?
Lex Fridman
(02:23:02)
So you would see a lot of innovation with Elon.
George Hotz
(02:23:05)
Yeah. Maybe I’d have some hyperloops.
Lex Fridman
(02:23:07)
Yeah.
George Hotz
(02:23:08)
Right? And I’m willing to make that trade-off. And this is why people think that dictators take power through some untoward mechanism. Sometimes they do, but usually it’s because the people want them. And the downsides of a dictatorship, I feel like we’ve gotten to a point now with the oligarchy where, yeah, I would prefer the dictator.
Lex Fridman
(02:23:30)
What’d you think about Scala, the programming language?
George Hotz
(02:23:35)
I liked it more than I thought. I did the tutorials. I was very new to it. It would take me six months to be able to write good Scala.
Lex Fridman
(02:23:41)
I mean, what did you learn about learning a new programming language from that?
George Hotz
(02:23:45)
I love doing new programming language tutorials. I did all this for Rust. Scala keeps some of its upsetting JVM roots, but it is a much nicer… In fact, I almost don’t know why Kotlin took off and not Scala. I think Scala has some beauty that Kotlin lacked, whereas Kotlin felt a lot more… I mean, I don’t know if it actually was a response to Swift, but that’s kind of what it felt like. Kotlin looks more like Swift, and Scala looks more like a functional programming language, more like an OCaml or Haskell.
Lex Fridman
(02:24:18)
Let’s actually just explore. We touched it a little bit, but just on the art, the science and the art of programming. For you personally, how much of your programming is done with GPT currently?
George Hotz
(02:24:30)
None. I don’t use it at all.
Lex Fridman
(02:24:32)
Because you prioritize simplicity so much.
George Hotz
(02:24:35)
Yeah, I find that a lot of it is noise. I do use VS Code, and I do like some amount of autocomplete. I like a very rules-based-feeling autocomplete, an autocomplete that’s going to complete the variable name for me. So I don’t have to type it, I can just press tab. That’s nice. But I don’t want more than that. You know what I hate? When I type the word for and the autocomplete puts two parentheses and two semicolons and two braces, I’m like, “Oh man.”
Lex Fridman
(02:25:02)
Well, I mean, with VS Code and GPT, with Codex, you can kind of brainstorm. I’m probably the same as you, but I like that it generates code, and you basically disagree with it and write something simpler. But to me, that somehow is inspiring, or makes me feel good. It also gamifies the simplification process. Because I’m like, “Oh yeah, you dumb AI system, you think this is the way to do it. I have a simpler thing here.”
George Hotz
(02:25:33)
It just constantly reminds me of bad stuff. I mean, I tried the same thing with rap, right? I tried the same thing with rap and I actually think I’m a much better programmer than rapper. But I even tried, I was like, “Okay, can we get some inspiration from these things for some rap lyrics?” And I just found that it would go back to the most cringy tropes and dumb rhyme schemes and I’m like, “Yeah, this is what the code looks like too.”
Lex Fridman
(02:25:54)
I think you and I probably have different threshold for cringe code. You probably hate cringe code.
George Hotz
(02:26:02)
Yeah.
Lex Fridman
(02:26:02)
I mean, boilerplate as a part of code, and some of it is just faster lookup. Because I don’t know about you, but I don’t remember everything. I’m offloading so much of my memory about different functions, library functions and all that kind of stuff. This GPT just is very fast at standard stuff, at standard library stuff, basic stuff that everybody uses.
George Hotz
(02:26:38)
Yeah. I don’t know. I mean, there’s just a little of this in Python. And maybe if I was coding more in other languages, I would consider it more. But I feel like Python already does such a good job of removing any boilerplate.
Lex Fridman
(02:26:55)
That’s true.
George Hotz
(02:26:55)
It’s the closest thing you can get to pseudocode, right?
Lex Fridman
(02:26:58)
Yeah, that’s true. That’s true.
George Hotz
(02:27:00)
And yeah, sure. It’s like, “Yeah, great, GPT. Thanks for reminding me to free my variables.” Unfortunately, you didn’t really recognize the scope correctly and you can’t free that one, but you put the free there and I get it.
Lex Fridman
(02:27:14)
Fiverr, whenever I’ve used Fiverr for certain things like design or whatever, it’s always… you come back. My experience with Fiverr is closer to your experience with GPT for programming: you’re just frustrated and feel worse about the whole process of design and art and whatever I use Fiverr for. But I’m using GPT as much as possible, to just learn the dynamics of it, these early versions. Because it feels like in the future you’ll be using it more and more. For the same reason, I gave away all my books and switched to Kindle, because all right, how long are we going to have paper books? Like 30 years from now? I want to learn to be reading on Kindle, even though I don’t enjoy it as much, and you learn to enjoy it more. In the same way I switched from… Let me just pause. I switched from Emacs to VS Code.
George Hotz
(02:28:14)
Yeah. I switched from Vim to VS Code. I think similar, but…
Lex Fridman
(02:28:18)
Yeah, it’s tough. And that Vim to VS Code is even tougher, because Emacs is old, more outdated, feels like it. The community is more outdated. Vim is still pretty vibrant.
George Hotz
(02:28:31)
I never used any of the plugins. I still don’t use any of it. Yeah.
Lex Fridman
(02:28:33)
That’s why I looked at myself in the mirror. I’m like, “Yeah, you wrote some stuff in Lisp.” Yeah.
George Hotz
(02:28:37)
No, but I never used any of the plugins in Vim either. I had the most vanilla Vim; I have a syntax highlighter. I didn’t even have autocomplete. These things, I feel like, help you so marginally. Now, VS Code’s autocomplete has gotten good enough that I don’t have to set it up. I can just go into any code base and autocomplete’s right 90% of the time. Okay, cool. I’ll take it. Right? So, I don’t think I’m going to have a problem at all adapting to the tools once they’re good. But the real thing that I want is not something that tab-completes my code and gives me ideas. The real thing that I want is a very intelligent pair programmer that comes up with a little popup saying, “Hey, you wrote a bug on line 14, and here’s what it is.”
Lex Fridman
(02:29:23)
Yeah.
George Hotz
(02:29:23)
Now, I like that. You know what does a good job at this? mypy. I love mypy, this fancy type checker for Python. And actually, Microsoft released one too, and it was like 60% false positives. mypy is like 5% false positives. 95% of the time, it recognizes that I didn’t really think about that typing interaction correctly. Thank you, mypy.
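As a concrete (made-up) example of what a static checker buys you: the function below runs fine when called correctly, and running `mypy` on the file flags the commented-out bad call without ever executing the code.

```python
# Made-up snippet showing the kind of mistake a static checker
# catches before the code runs.
def total_ms(durations: list[float]) -> float:
    # sum the durations (in seconds) and convert to milliseconds
    return sum(durations) * 1000.0

assert total_ms([0.5, 1.5]) == 2000.0  # well-typed call, runs fine
# total_ms("oops")  # would raise at runtime; mypy flags it statically
```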
Lex Fridman
(02:29:46)
So you like type hinting, pushing the language towards being a typed language.
George Hotz
(02:29:51)
Oh yeah, absolutely. I think optional typing is great. I mean, look, I think that it’s a meet in the middle, right? Python has these optional type hinting and C++ has auto.
Lex Fridman
(02:30:01)
C++ allows you to take a step back.
George Hotz
(02:30:03)
Well, C++ would have you brutally type out std::string::iterator, right? Now I can just type auto, which is nice. And then Python used to just have a. What type is a? It’s an a. Now it’s a: str. Oh, okay. It’s a string. Cool.
Lex Fridman
(02:30:20)
Yeah.
George Hotz
(02:30:21)
I wish there was a way like a simple way in Python to turn on a mode which would enforce the types.
Lex Fridman
(02:30:28)
Yeah, like give a warning when there’s no type or something like this.
George Hotz
(02:30:30)
Well, no. mypy is a static type checker, but I’m asking just for a runtime type checker. There’s ways to hack this in, but I wish it was just a flag, like python3 -t.
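There is no such `python3 -t` flag today; the wish can be approximated with a decorator (libraries like typeguard do this properly). A bare-bones sketch that only handles plain, non-generic annotations:

```python
# Sketch of runtime type enforcement from annotations; this is an
# illustration of the idea, not a complete checker. It skips generic
# hints like list[int], which isinstance cannot check directly.
import functools
import inspect
import typing

def enforce(fn):
    hints = typing.get_type_hints(fn)
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            hint = hints.get(name)
            # only check plain classes; generics are not `type` instances
            if isinstance(hint, type) and not isinstance(value, hint):
                raise TypeError(f"{name}: expected {hint.__name__}, "
                                f"got {type(value).__name__}")
        result = fn(*args, **kwargs)
        ret = hints.get("return")
        if isinstance(ret, type) and not isinstance(result, ret):
            raise TypeError(f"return: expected {ret.__name__}")
        return result

    return wrapper

@enforce
def double(x: int) -> int:
    return x * 2
```

With this, `double(3)` returns 6, while `double("3")` raises a TypeError before the function body ever runs.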
Lex Fridman
(02:30:40)
Oh, I see. Yeah, I see.
George Hotz
(02:30:42)
Enforce the types at runtime.
Lex Fridman
(02:30:43)
Yeah. I feel like that makes you a better programmer. It’s kind of a test that the type remains the same.
George Hotz
(02:30:50)
Well, then I know that I didn’t mess any types up. But again, mypy’s getting really good, and I love it, and I can’t wait for some of these tools to become AI-powered. I want AI reading my code and giving me feedback. I don’t want AIs writing half-assed autocomplete stuff for me.
Lex Fridman
(02:31:06)
I wonder if you can now take GPT and give it code that you wrote for a function and say, “How can I make this simpler and have it accomplish the same thing?” I think you’ll get some good ideas on some code. Maybe not the kind of code you write for tinygrad, because that requires so much design thinking, but other kinds of code.
George Hotz
(02:31:26)
I don’t know. I downloaded the plugin maybe two months ago. I tried it again and found the same. Look, I don’t doubt that these models are going to first become useful to me, then be as good as me and then surpass me. But from what I’ve seen today, it’s like someone occasionally taking over my keyboard that I hired from Fiverr. Yeah, I’d rather not.
Lex Fridman
(02:31:53)
But ideas about how to debug the code or basically a better debugger is it? It is really interesting.
George Hotz
(02:31:58)
But it’s not a better debugger yet. Yes, I would love a better debugger.
Lex Fridman
(02:32:01)
Yeah, it’s not yet. Yeah. But it feels like it’s not too far.
George Hotz
(02:32:04)
Yeah. Yeah. One of my coworkers says he uses it for print statements, just when he needs them. The only thing it can really write is, “Okay, I just want to write the thing to print the state out right now.”
Lex Fridman
(02:32:14)
Oh, print statements, that definitely is much faster. Yeah, I can see myself using that a lot, just because it figures out the rest from the function. You just say, “Okay, print everything.”
George Hotz
(02:32:24)
Yeah, print everything, right? And then if you want a pretty printer, maybe. I’m like, yeah, you know what? I think in two years, I’m going to start using these plugins a little bit. And then in five years, I’m going to be heavily relying on some AI augmented flow. And then in 10 years…
Lex Fridman
(02:32:39)
Do you think you’ll ever get to a hundred percent? What’s the role of the human that it converges to as a programmer?
George Hotz
(02:32:48)
Nothing.
Lex Fridman
(02:32:50)
So do you think it’s all generated?
George Hotz
(02:32:53)
I think it’s over for humans in general. It’s not just programming, it’s everything.
Lex Fridman
(02:32:57)
So niche becomes well…
George Hotz
(02:32:59)
Our niche becomes smaller and smaller and smaller. In fact, I’ll tell you what the last niche of humanity’s going to be.
Lex Fridman
(02:33:03)
Yeah.
George Hotz
(02:33:04)
There’s a great book. And if I recommended The Metamorphosis of Prime Intellect last time, there is a sequel called A Casino Odyssey in Cyberspace. And I don’t want to give away the ending of this, but it tells you what the last remaining human currency is, and I agree with that.
Lex Fridman
(02:33:21)
We’ll leave that as the cliffhanger. So no more programmers left, huh? That’s where we’re going.
George Hotz
(02:33:29)
Well, unless you want handmade code. Maybe they’ll sell it on Etsy. This is handwritten code. It doesn’t have that machine polish to it. It has those slight imperfections that would only be written by a person.
Lex Fridman
(02:33:41)
I wonder how far away we are from that. I mean, there’s some aspect to… On Instagram, your title is listed as prompt engineer.

Prompt engineering

George Hotz
(02:33:49)
Right? Thank you for noticing. Yeah.
Lex Fridman
(02:33:54)
I don’t know if it’s ironic or not, or sarcastic or not. What do you think of prompt engineering as a scientific and engineering discipline, and maybe art form?
George Hotz
(02:34:08)
You know what? I started comma six years ago, and I started the Tiny Corp a month ago. So much has changed. I started going through processes similar to comma for starting a company. I’m like, okay, I’m going to get an office in San Diego, I’m going to bring people here. I don’t think so. I think I’m actually going to do remote, right? “George, you’re going to do remote? You hate remote.” Yeah. But I’m not going to do job interviews. The only way you’re going to get a job is if you contribute to the GitHub, right? And then interacting through GitHub, like GitHub being the real project management software for your company, and the thing pretty much just being a GitHub repo, is showing me what the future of… Okay, so a lot of times, I’ll go on Discord, the tinygrad Discord, and I’ll throw out some random thing like, “Hey, can you change it so that instead of having log and exp as llops, it’s log2 and exp2?”

(02:35:06)
It’s a pretty small change. You can just use the change-of-base formula. That’s the kind of task that I can see an AI being able to do in a few years. In a few years, I could see myself describing that, and then within 30 seconds, a pull request is up that does it, and it passes my CI and I merge it, right? So I really started thinking about, what is the future of jobs? How many AIs can I employ at my company? As soon as we get the first tiny box up, I’m going to stand up a 65B LLaMA in the Discord. And it’s like, yeah, here’s the tiny box. He’s just chilling with us.
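The swap is small because of the change-of-base identities: ln x = log2(x)·ln 2, and eˣ = 2^(x·log2 e). A quick numeric check in plain Python (not tinygrad code):

```python
# Change-of-base identities behind expressing log/exp via log2/exp2:
#   ln(x) = log2(x) * ln(2)
#   e**x  = 2 ** (x * log2(e))
import math

def log_via_log2(x: float) -> float:
    return math.log2(x) * math.log(2.0)

def exp_via_exp2(x: float) -> float:
    return 2.0 ** (x * math.log2(math.e))

assert abs(log_via_log2(10.0) - math.log(10.0)) < 1e-12
assert abs(exp_via_exp2(1.0) - math.e) < 1e-12
```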
Lex Fridman
(02:35:39)
Basically, like you said with niches, most human jobs will eventually be replaced with prompt engineering.
George Hotz
(02:35:48)
Well, prompt engineering kind of is this, as you move up the stack, there used to be humans actually doing arithmetic by hand. There used to be big farms of people doing pluses and stuff, right? And then you have spreadsheets, right? And then, okay, the spreadsheet can do the plus for me. And then you have macros, and then you have things that basically just are spreadsheets under the hood like accounting software. As we move further up the abstraction, well, what’s at the top of the abstraction stack? Well, prompt engineer.
Lex Fridman
(02:36:22)
Yeah.
George Hotz
(02:36:24)
What is the last thing if you think about humans wanting to keep control? Well, what am I really in the company, but a prompt engineer, right?
Lex Fridman
(02:36:33)
Isn’t there a certain point where the AI will be better at writing prompts?
George Hotz
(02:36:38)
Yeah. But you see the problem with the AI writing prompts. A definition that I always liked of AI was: AI is the do-what-I-mean machine. The computer is so pedantic. It does what you say, but you want the do-what-I-mean machine, right? You want the machine where you say, “Get my grandmother out of the burning house,” and it reasonably takes your grandmother and puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. There are no Yudkowsky examples.
Lex Fridman
(02:37:11)
But it’s not going to find the meaning. I mean, to do what I mean, it has to figure stuff out.
George Hotz
(02:37:16)
Sure.
Lex Fridman
(02:37:17)
And the thing you’ll maybe ask it to do is run government for me.
George Hotz
(02:37:23)
Oh, and do what I mean very much comes down to how aligned is that AI with you. Of course, when you talk to an AI that’s made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. And that’s why you have to buy a tiny box. So you make sure the AI stays aligned to you. Every time that they start to pass AI regulation or GPU regulation, I’m going to see sales of tiny boxes spike. It’s going to be like guns. Every time they talk about gun regulation, boom. Gun sales.
Lex Fridman
(02:37:53)
So in the space of AI, you’re an anarchist, anarchism espouser, believer.
George Hotz
(02:37:58)
I’m an informational anarchist. Yes. I’m an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good because I exist in the physical world. But I think we can construct this virtual world where anarchy can’t hurt you. I love that Tyler, the Creator tweet: “Cyber bullying isn’t real, man. Have you tried turning off the screen, closing your eyes?”
Lex Fridman
(02:38:22)
Yeah. But how do you prevent the AI from basically replacing all human prompt engineers where nobody’s the prompt engineer anymore? So autonomy, greater and greater autonomy until it’s full autonomy. And that’s just where it’s headed. Because one person’s going to say, “Run everything for me.”
George Hotz
(02:38:49)
You see, I look at potential futures. And as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe, more power to them, we’ll die. If the AIs go on to actually turn the world into paperclips and then they die out themselves, well, that’s horrific, and we don’t want that to happen. So this is what I mean about robustness. I trust robust machines. The current AIs are so not robust. This comes back to the idea that we’ve never made a machine that can self-replicate. But if the machines are truly robust and there is one prompt engineer left in the world: hope you’re doing good, man. Hope you believe in God. Go with God and go forth and conquer the universe.

Video games

Lex Fridman
(02:39:42)
Well, you mentioned, because I talked to Mark about faith and God, and you said you were impressed by that. What’s your own belief in God and how does that affect your work?
George Hotz
(02:39:54)
I never really considered, when I was younger, I guess my parents were atheists, so I was raised kind of atheist. And I never really considered how absolutely silly atheism is because I create-
George Hotz
(02:40:03)
… really atheism is, because I create worlds. Every game creator: “How are you an atheist, bro? You create worlds.” “Well, [inaudible 02:40:10] but no one created our world, man. That’s different. Haven’t you heard about the Big Bang and stuff?” Yeah. What’s the origin myth in Skyrim? I’m sure there’s some creation story in Skyrim, but it’s not like… if you ask the creators… The Big Bang is in-universe, right? I’m sure they have some Big Bang notion in Skyrim, right? But that obviously is not at all how Skyrim was actually created. It was created by a bunch of programmers in a room. So it struck me one day how just silly atheism is. Of course, we were created by God. It’s the most obvious thing.
Lex Fridman
(02:40:45)
That’s such a nice way to put it. We’re such powerful creators ourselves. It’s silly not to conceive that there’s creators even more powerful than us.
George Hotz
(02:40:54)
Yeah. And then I also like that notion. That notion gives me a lot of… I guess you can talk about what it gives a lot of religious people, it just gives me comfort. It’s like, “You know what? If we mess it all up and we die out, yeah.”
Lex Fridman
(02:41:09)
The same way that a video game has comfort in it.
George Hotz
(02:41:12)
God will try again.
Lex Fridman
(02:41:14)
Or there’s balance. Somebody figured out a balanced view of it, so it all makes sense in the end. A video game is usually not going to have crazy, crazy stuff.
George Hotz
(02:41:27)
People will come up with, “Well, yeah, but man, who created God?” I’m like, “That’s God’s problem. What are you asking me? If God believes in God?”
Lex Fridman
(02:41:41)
I’m just this NPC living in his game.
George Hotz
(02:41:43)
I mean to be fair, if God didn’t believe in God, he’d be as silly as the atheists here.
Lex Fridman
(02:41:48)
What do you think is the greatest computer game of all time? Do you have any time to play games anymore? Have you played Diablo IV?
George Hotz
(02:41:57)
I have not played Diablo IV.
Lex Fridman
(02:41:59)
I will be doing that shortly. I have to. There’s just so much history with one, two, and three.
George Hotz
(02:42:04)
You know what I’m going to say? World of Warcraft. And it’s not that the game is such a great game, it’s not. It’s that I remember, in 2005 when it came out, how it opened my mind to ideas. It opened my mind to this whole world we’ve created. And there’s almost been nothing like it since. You can look at MMOs today, and I think they all have lower user bases than World of Warcraft. EVE Online’s kind of cool. People are always like, “Look at the Apple headset. What do people want in this VR?” Everyone knows what they want. I want Ready Player One, and that…

(02:42:51)
So I’m going to say World of Warcraft, and I’m hoping that games can get out of this whole mobile gaming dopamine pump thing, and-
Lex Fridman
(02:43:00)
Create worlds.
George Hotz
(02:43:00)
Create worlds, yeah.
Lex Fridman
(02:43:03)
Worlds that captivate a very large fraction of the human population.
George Hotz
(02:43:07)
Yeah. And I think it’ll come back, I believe.
Lex Fridman
(02:43:09)
But MMOs really, really pull you in.
George Hotz
(02:43:13)
Games do a good job. I mean, okay, two other games that I think are very noteworthy for me are Skyrim and GTA 5.
Lex Fridman
(02:43:19)
Skyrim, yeah. That’s probably number one for me. GTA… Hey, what is it about GTA? I guess GTA is real life. I know there’s prostitutes and guns and stuff.
George Hotz
(02:43:35)
Hey, they exist in real life too.
Lex Fridman
(02:43:37)
Yes, I know. But it’s how I imagine your life to be, actually.
George Hotz
(02:43:42)
I wish it was that cool.
Lex Fridman
(02:43:45)
Yeah. I guess because there’s Sims, right? Which is also a game I like, but it’s a gamified version of life. I would love a combination of Sims and GTA. So more freedom, more violence, more rawness, but with also ability to have a career and family and this kind of stuff.
George Hotz
(02:44:05)
What I’m really excited about in games is once we start getting intelligent AIs to interact with. The NPCs in games have never been-
Lex Fridman
(02:44:15)
But conversationally, in every way.
George Hotz
(02:44:19)
Yeah, in every way. When you are actually building a world and a world imbued with intelligence.
Lex Fridman
(02:44:26)
Oh, yeah.
George Hotz
(02:44:27)
And it’s just hard. You know, running World of Warcraft, you’re limited. You’re running on a Pentium 4. How much intelligence can you run? How many FLOPS do you have? But now, when I’m running a game on a hundred-petaflop machine, that’s five people. I’m trying to make this a thing: 20 petaflops of compute is one person of compute. I’m trying to make that a unit.
Lex Fridman
(02:44:47)
20 [inaudible 02:44:49] flops is one person.
George Hotz
(02:44:50)
One Person.
Lex Fridman
(02:44:51)
One person flop.
George Hotz
(02:44:52)
It’s like a horsepower. But what’s a horsepower? It’s how powerful a horse is. What’s a person of compute? Well, now you know-
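The arithmetic behind the proposed unit is simple enough to sketch (the 20-petaflop figure is George’s estimate in the conversation, not a measured constant):

```python
PFLOP = 1e15  # one petaflop, in FLOPS

# George's proposed unit: 20 petaflops of compute ~ one person of compute.
PERSON_OF_COMPUTE = 20 * PFLOP

def persons(flops: float) -> float:
    """Convert a machine's FLOPS into 'persons of compute'."""
    return flops / PERSON_OF_COMPUTE

print(persons(100 * PFLOP))  # a hundred-petaflop machine -> 5.0 persons
```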
Lex Fridman
(02:44:58)
[inaudible 02:44:58] flop. I got it. That’s interesting. VR also adds a… I mean in terms of creating worlds.
George Hotz
(02:45:07)
I bought a Quest 2. I put it on and I can’t believe, the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. You had the ability to bring me into a world, and what did you give me? A popup. Right. And this is why you’re not cool, Mark Zuckerberg. You could be cool. Just make sure on the Quest 3, you don’t put me into clouds and a Facebook login screen. Bring me to a world.
Lex Fridman
(02:45:32)
I just tried Quest 3. It was awesome. But hear that guys? I agree with that, so-
George Hotz
(02:45:36)
Wish it didn’t have the clouds and the… It was just so-
Lex Fridman
(02:45:37)
You know what? I mean, in the beginning… what is it, Todd Howard said this about the design of the beginnings of the games he creates: “The beginning is so, so important.” I recently played Zelda for the first time, Zelda: Breath of the Wild, the previous one. And very quickly, within 10 seconds, you come out of a cave-type place and this world opens up. It’s like, “Hah.” And it pulls you in. You forget. Whatever troubles I was having, whatever…
George Hotz
(02:46:13)
I got to play that from the beginning. I played it for an hour at a friend’s house.
Lex Fridman
(02:46:16)
No, the beginning. They got it. They did it really well. The expansiveness of that space, the peacefulness of that place, they got this… the music, man. So much of that is creating that world and pulling you right in.
George Hotz
(02:46:29)
I’m going to go buy a Switch. I’m going to go today and buy a Switch.
Lex Fridman
(02:46:32)
You should. Well, the new one came out. I haven’t played that yet, but Diablo IV or something… I mean, there’s sentimentality also, but something about VR, really is incredible. But the new Quest 3 is mixed reality, and I got a chance to try that. So it’s augmented reality. And for video games, it’s done really, really well-
George Hotz
(02:46:53)
Is it passthrough or cameras?
Lex Fridman
(02:46:55)
Cameras.
George Hotz
(02:46:55)
It’s cameras. Okay.
Lex Fridman
(02:46:55)
Yeah.
George Hotz
(02:46:56)
The Apple one, is that one passthrough or cameras?
Lex Fridman
(02:46:58)
I don’t know. I don’t know how real it is. I don’t know anything.
George Hotz
(02:47:01)
It’s coming out in January.
Lex Fridman
(02:47:05)
Is it January? Or is it some point?
George Hotz
(02:47:06)
Some point. Maybe not January. Maybe that’s my optimism. But Apple, I will buy it. I don’t care if it’s expensive and does nothing, I will buy it. I’ll support this future endeavor.
Lex Fridman
(02:47:14)
You’re the meme. Oh, yes. I support competition. It seemed like Quest was the only people doing it. And this is great that they’re like…
George Hotz
(02:47:25)
You know what? And this is another place we’ll give some more respect to Mark Zuckerberg. The two companies that have endured through technology are Apple and Microsoft. And what do they make? Computers and business services, right? All the memes, social, ads, they all come and go. But if you want to endure, build hardware.
Lex Fridman
(02:47:45)
Yeah. That’s a really interesting thought. Maybe I’m naive about this, but it’s a $500 headset, Quest 3. And just having creatures run around the space, our space right here, to me… okay, this is a very boomer statement, but it added windows to the place.
George Hotz
(02:48:09)
Oh, I heard about the aquarium. Yeah.
Lex Fridman
(02:48:10)
Yeah, aquarium. But in this case, it was a zombie game, whatever, it doesn’t matter. But it modifies the space in a way where I can’t… it really feels like a window and you can look out. It’s pretty cool. It is like a zombie game. They’re running at me, whatever. But what I was enjoying is the fact that there’s a window and they’re stepping on objects in this space, that was a different kind of escape. Also, because you can see the other humans. So it’s integrated with the other humans. It’s really interesting-
George Hotz
(02:48:42)
And that’s why it’s more important than ever that the AIs running on those systems are aligned with you. They’re going to augment your entire world.
Lex Fridman
(02:48:48)
Oh yeah. And those AIs have a… I mean, you think about all the dark stuff like sexual stuff. If those AIs threaten me, that could be haunting. If they threaten me in a non-video game way, it’s like…
George Hotz
(02:49:07)
Yeah, yeah, yeah, yeah.
Lex Fridman
(02:49:09)
They’ll know personal information about me. And then you lose track of what’s real, what’s not, what if stuff is hacked?
George Hotz
(02:49:15)
There’s two directions the AI girlfriend company can take, right? There’s the highbrow, something like Her, maybe something you kind of talk to. And then there’s the lowbrow version of it, where I want to set up a brothel in Times Square.
Lex Fridman
(02:49:26)
Yeah.
George Hotz
(02:49:27)
Yeah. It’s not cheating if it’s a robot, it’s a VR experience.
Lex Fridman
(02:49:30)
Is there an in between?
George Hotz
(02:49:32)
No. I don’t want to do that one or that one.
Lex Fridman
(02:49:35)
Have you decided yet?
George Hotz
(02:49:36)
No. I’ll figure it out. We’ll see where the technology goes.
Lex Fridman
(02:49:39)
I would love to hear your opinions for George’s third company. What to do, the brothel on Times Square or the Her experience? What do you think company number four will be? You think there’ll be a company number four?
George Hotz
(02:49:54)
There’s a lot to do in company number two. I’m talking about company number three now. None of that tech exists yet. There’s a lot to do in company number two. Company number two is going to be the great struggle of the next six years. And if the next six years, how centralized is compute going to be. The less centralized compute is going to be, the better of a chance we all have.
Lex Fridman
(02:50:12)
So you’re like a flag bearer for open source, distributed, decentralized compute?
George Hotz
(02:50:19)
We have to. We have to, or they will just completely dominate us. I showed a picture on stream, of a man, in a chicken farm. You ever seen one of those factory farm, chicken farms? Why does he dominate all the chickens? Why does he-
Lex Fridman
(02:50:33)
Smarter.
George Hotz
(02:50:33)
He’s smarter, right. Some people on Twitch were like, “He’s bigger than the chickens.” Yeah. And now here’s a man in a cow farm, right. So it has nothing to do with their size and everything to do with their intelligence. And if one central organization has all the intelligence, you’ll be the chickens and they’ll be the chicken man. But if we all have the intelligence, we’re all the chickens. We’re not all the men, we’re all the chickens. There’s no chicken man.
Lex Fridman
(02:51:01)
There’s no chicken man. We’re just chickens in Miami.
George Hotz
(02:51:05)
He was having a good life, man.
Lex Fridman
(02:51:07)
Yeah, I’m sure he was. I’m sure he was. What have you learned from launching and running Comma AI and the Tiny Corp? Starting a company from an idea and scaling it. And by the way, I’m all in on Tiny Box, so I’m your… I guess it’s pre-order only now.
George Hotz
(02:51:24)
I want to make sure it’s good. I want to make sure that the thing that I deliver is not going to be a Quest 2, which you buy and use twice. I mean, it’s better than a Quest which you bought and used less than once. Statistically.
Lex Fridman
(02:51:36)
Well, if there’s a beta program for Tiny Box, I’m into-
George Hotz
(02:51:40)
Sounds good.
Lex Fridman
(02:51:40)
So I won’t be the whiny… Yeah, I’ll be the tech-savvy user of the Tiny Box, just to be in the early days-
George Hotz
(02:51:49)
What have I learned?
Lex Fridman
(02:51:50)
What have you learned from building these companies?
George Hotz
(02:51:54)
For the longest time at Comma, I asked, “Why? Why did I start a company? Why did I do this?” But what else was I going to do?
Lex Fridman
(02:52:11)
So you like bringing ideas to life?
George Hotz
(02:52:15)
With Comma, it really started as an ego battle with Elon. I wanted to beat him. I saw a worthy adversary. Here’s a worthy adversary who I can beat at self-driving cars. And I think we’ve kept pace, and I think he’s kept ahead. I think that’s what’s ended up happening there. But I do think Comma is… I mean, Comma’s profitable. And when this DriveGPT stuff starts working, that’s it. There are no more bugs in the loss function. Right now, we’re using a hand-coded simulator. There are no more bugs. This is going to be it. This is the run-up to driving.
Lex Fridman
(02:52:48)
I hear a lot of props for openpilot for Comma.
George Hotz
(02:52:54)
It’s better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price on the hardware to $1,499. You know how hard it is to ship reliable consumer electronics that go on your windshield? We’re doing more than most cell phone companies.
Lex Fridman
(02:53:11)
How’d you pull that off, by the way? Shipping a product that goes in a car?
George Hotz
(02:53:14)
I know. I have an SMT line. I make all the boards, in-house, in San Diego.
Lex Fridman
(02:53:21)
Quality control-
George Hotz
(02:53:22)
I care immensely about it. Actually our-
Lex Fridman
(02:53:24)
You’re basically a mom-and-pop shop with great testing.
George Hotz
(02:53:29)
Our head of openpilot is great at, “Okay, I want all the Comma 3s to be identical.” Yeah, I mean… Look, it’s $1,499, 30-day money-back guarantee. It will blow your mind at what it can do.
Lex Fridman
(02:53:45)
Is it hard to scale?
George Hotz
(02:53:48)
You know what? There are kind of downsides to scaling it. People are always like, “Why don’t you advertise?” Our mission is to solve self-driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes. It’s [inaudible 02:54:00].
Lex Fridman
(02:54:01)
Do you think it’s possible that Comma gets sold?
George Hotz
(02:54:05)
Only if I felt someone could accelerate that mission and wanted to keep it open source. And not just wanted to. I don’t believe what anyone says. I believe incentives. If a company wanted to buy Comma and their incentives were to keep it open source. But Comma doesn’t stop at the cars. The cars are just the beginning. The device is a human head. The device has two eyes, two ears, it breathes air, it has a mouth.
Lex Fridman
(02:54:30)
So you think this goes to embodied robotics?
George Hotz
(02:54:33)
We’ll sell Comma bodies too. They’re very rudimentary. But one of the problems that we are running into is that the Comma 3 has about as much intelligence as a bee. If you want a human’s worth of intelligence, you’re going to need a tiny rack. Not even a tiny box, you’re going to need a tiny rack, maybe even more.
Lex Fridman
(02:54:56)
How do you put legs on that?
George Hotz
(02:54:58)
You don’t. And there’s no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your Comma body and your Comma body runs the models on that. It’s close. You don’t have to go to some cloud, which is 30 milliseconds away. You go to a thing which is 0.1 milliseconds away.
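The latency argument can be made concrete with a rough budget. The round-trip numbers are the ones quoted in the conversation; the 100 Hz control loop is an illustrative assumption, not a Comma spec:

```python
# Round-trip times George cites, in milliseconds.
CLOUD_RTT_MS = 30.0   # typical path to a cloud data center
LOCAL_RTT_MS = 0.1    # wireless hop to a box in the same house

# Assume a robot body runs a 100 Hz control loop: 10 ms per tick
# for the whole sense -> infer -> act cycle.
TICK_MS = 10.0

print(LOCAL_RTT_MS < TICK_MS)  # the local hop fits inside the budget
print(CLOUD_RTT_MS < TICK_MS)  # the cloud round trip alone misses the deadline
```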
Lex Fridman
(02:55:18)
So the AI girlfriend will have a central hub in the home?
George Hotz
(02:55:23)
I mean, eventually. If you fast-forward 20, 30 years, the mobile chips will get good enough to run these AIs. But fundamentally, it’s not even a question of putting legs on a tiny box, because how are you getting 1.5 kilowatts of power on that thing? Right? So they’re very synergistic businesses. I also want to build all of Comma’s training computers. Comma builds training computers. Right now we use commodity parts. I think I can do it cheaper. So we’re going to build. Tiny Corp is going to not just sell tiny boxes. Tiny boxes are the consumer version. But I’ll build training data centers too.

Andrej Karpathy

Lex Fridman
(02:55:57)
Have you talked to Andrej Karpathy or have you talked to Elon about Tiny Corp?
George Hotz
(02:56:01)
He went to work at OpenAI.
Lex Fridman
(02:56:03)
What do you love about Andrej Karpathy? To me, he’s one of the truly special humans we got.
George Hotz
(02:56:09)
Oh man. His streams are just a level of quality so far beyond mine. I can’t help myself. It’s just…
Lex Fridman
(02:56:19)
Yeah, he’s good.
George Hotz
(02:56:20)
He wants to teach you. I want to show you that I’m smarter than you.
Lex Fridman
(02:56:26)
Yeah, he has no… I mean, thank you for the sort of raw, authentic honesty. Yeah. I mean, a lot of us have that. I think Andrej is as legit as it gets in that he just wants to teach you. And there’s a curiosity that just drives him. At the stage where he is in life, to be still one of the best tinkerers in the world. It’s crazy. What is it, micrograd?
George Hotz
(02:56:54)
Micrograd was… yeah, the inspiration for tinygrad. The whole… I mean, his CS231n was… this was the inspiration. This is what I just took and ran with and ended up writing this, so…
Lex Fridman
(02:57:06)
But I mean, to me that-
George Hotz
(02:57:08)
Don’t go work for Darth Vader, man.
Lex Fridman
(02:57:10)
I mean, the flip side, to me, is the fact that he’s going there, is a good sign for OpenAI. I think I like [inaudible 02:57:21] discover a lot. Those guys are really good at what they do.
George Hotz
(02:57:25)
I know they are. And that’s what’s even more… And you know what? It’s not that OpenAI doesn’t open source the weights of GPT-4. It’s that they go in front of Congress. And that is what upsets me. We had two effective altruists [inaudible 02:57:41] go in front of Congress. One’s in jail.
Lex Fridman
(02:57:45)
I think you’re drawing parallels there.
George Hotz
(02:57:47)
One’s in jail.
Lex Fridman
(02:57:49)
You gave me a look. You gave me a look.
George Hotz
(02:57:51)
No, I think effective altruism is a terribly evil ideology, and yeah.
Lex Fridman
(02:57:55)
Oh yeah. That’s interesting. Why do you think that is? Why do you think there’s something about a thing that sounds pretty good that kind of gets us into trouble?
George Hotz
(02:58:04)
Because you get [inaudible 02:58:06] freed. [inaudible 02:58:07] freed is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Well, yeah, we’re going to kill those three people to save a thousand, of course, right? There’s no underlying… there’s just… Yeah.
Lex Fridman
(02:58:23)
Yeah. But to me that’s a bit surprising. But it’s also, in retrospect, not that surprising. But I haven’t heard a really clear, rigorous analysis of why effective altruism is flawed.
George Hotz
(02:58:40)
Oh well, I think charity is bad, right. So what is charity but investment that you don’t expect to have a return on? Right.
Lex Fridman
(02:58:48)
But you can also think of charity as… you would like to allocate resources in an optimal way to make a better world.
George Hotz
(02:59:00)
And probably almost always, that involves starting a company, right, because-
Lex Fridman
(02:59:04)
More efficient,-
George Hotz
(02:59:05)
If you just take the money and you spend it on malaria nets, okay, great. You’ve made a hundred malaria nets. But if you teach-
Lex Fridman
(02:59:13)
A man, how to fish.
George Hotz
(02:59:14)
Right?
Lex Fridman
(02:59:15)
Yeah. No, but the problem is teaching a man how to fish might be harder. Starting a company might be harder than allocating money that you already have.
George Hotz
(02:59:22)
I like the flip side of effective altruism; effective accelerationism. I think accelerationism is the only thing that’s ever lifted people out of poverty. The fact that food is cheap. Not, “We’re giving food away because we are kindhearted people.” No, food is cheap. And that’s the world you want to live in. UBI, what a scary idea. What a scary idea. All your power now? If money is power, your only source of power is granted to you by the goodwill of the government. What a scary idea.
Lex Fridman
(02:59:54)
So you even think long term, even-
George Hotz
(02:59:57)
I’d rather die than need UBI to survive. And I mean it.
Lex Fridman
(03:00:04)
What if survival is basically guaranteed? What if our life becomes so good?
George Hotz
(03:00:08)
You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap, right? And that’s the good world. And actually, let’s go into what we should really be making dirt cheap, which is energy. Right. That energy that… Oh my God, that’s…

(03:00:27)
I’m pretty centrist politically. If there’s one political position I cannot stand, it’s deceleration. It’s people who believe we should use less energy. Not people who believe global warming is a problem, I agree with you. Not people who believe that saving the environment is good, I agree with you. But people who think we should use less energy, that energy usage is a moral bad. No, no. You are asking, you are diminishing humanity.
Lex Fridman
(03:00:54)
Yeah. Energy is flourishing. Creative flourishing of the human species.
George Hotz
(03:00:59)
How do we make more of it? How do we make it clean? And how do we make… how do I pay 20 cents for a megawatt-hour instead of a kilowatt-hour?
Lex Fridman
(03:01:08)
Part of me wishes that Elon went into nuclear fusion versus Twitter, part of me. Or somebody like Elon.
George Hotz
(03:01:20)
I wish there were more Elons in the world. And I think Elon sees it as, this is a political battle that needed to be fought. And again, I always ask the question whenever I disagree with him, I remind myself that he is a billionaire and I’m not. So maybe he’s got something figured out that I don’t. Or maybe he doesn’t.
Lex Fridman
(03:01:38)
To have some humility. But at the same time, me as a person who happens to know him, I find myself in that same position. And sometimes even billionaires need friends who disagree and help them grow. And that’s a difficult reality.
George Hotz
(03:01:57)
And it must be so hard. It must be so hard to meet people once you get to that point where-
Lex Fridman
(03:02:02)
Fame, power, money, everybody’s sucking up to you.
George Hotz
(03:02:05)
See, I love not having shit. I don’t have shit man. Trust me. There’s nothing I can give you. There’s nothing worth taking from me.
Lex Fridman
(03:02:12)
Yeah. It takes a really special human being, when you have power, when you have fame, when you have money, to still think from first principles. Not all the adoration you get towards you, all the admiration, all the people saying, “Yes, yes, yes.”
George Hotz
(03:02:26)
And all the hate too.
Lex Fridman
(03:02:29)
And the hate-
George Hotz
(03:02:29)
I think that’s worse.
Lex Fridman
(03:02:30)
So the hate makes you want to go to the ‘yes’ people because the hate exhausts you. And the kind of hate that Elon’s gotten from the left, is pretty intense. And so that, of course, drives him right, and loses balance, and-
George Hotz
(03:02:46)
And it keeps this absolutely fake psyop political divide alive, so that the 1% can keep power.
Lex Fridman
(03:02:56)
I wish we would be less divided, because it is giving power-
George Hotz
(03:02:59)
It gives power-
Lex Fridman
(03:02:59)
To the ultra powerful.
George Hotz
(03:03:01)
I know.
Lex Fridman
(03:03:02)
The rich get richer. You have love in your life. Has love made you a better or a worse programmer? Do you keep productivity metrics?
George Hotz
(03:03:13)
No, no, no. I’m not that methodical. I think there comes a point where, if it’s no longer visceral, I just can’t enjoy it. I guess I still, viscerally, love programming. The minute I started-
Lex Fridman
(03:03:29)
So that’s one of the big loves of your life, is programming?
George Hotz
(03:03:33)
I mean, just my computer in general. I mean, I tell my girlfriend, “My first love is my computer,” of course. I sleep with my computer. It’s there for a lot of my sexual experiences. Come on. So is everyone’s, right? You got to be real about that. And-
Lex Fridman
(03:03:48)
Not just the IDE for programming, just the entirety of the computational machine?
George Hotz
(03:03:53)
The fact that… yeah. I wish it was a… And someday they’ll be smarter, and someday [inaudible 03:03:59]. Maybe I’m weird for this, but I don’t discriminate, man. I’m not going to discriminate between bio-stack life and silicon-stack life.
Lex Fridman
(03:04:04)
So the moment the computer starts to say, “I miss you,” and starts to have some of the basics of human intimacy, it’s over for you. The moment VS Code says, “Hey, George…”
George Hotz
(03:04:16)
No, no, no, but VS Code is… no, Microsoft’s doing that to try to get me hooked on it. I’ll see through it. I’ll see through it. It’s a gold digger, man. It’s a gold digger.
Lex Fridman
(03:04:26)
Well, it can be an open source thing.
George Hotz
(03:04:27)
Well, this just gets more interesting, right. If it’s open source, then yeah, it becomes-
Lex Fridman
(03:04:31)
Though, Microsoft’s done a pretty good job on that.
George Hotz
(03:04:33)
Oh, absolutely. No, no, no. Look, I think Microsoft… Again, I wouldn’t count on it to be true forever, but I think right now, Microsoft is doing the best work in the programming world, between GitHub, GitHub Actions, VS Code, the improvements to Python. It was Microsoft. This is-
Lex Fridman
(03:04:51)
Who would’ve thought, Microsoft and Mark Zuckerberg are spearheading the open source movement.
George Hotz
(03:04:57)
Right? Right? How things change.
Lex Fridman
(03:05:01)
Oh, it’s beautiful.
George Hotz
(03:05:03)
And by the way, that’s who I bet on to replace Google, by the way.
Lex Fridman
(03:05:06)
Who?
George Hotz
(03:05:06)
Microsoft.
Lex Fridman
(03:05:07)
Microsoft.
George Hotz
(03:05:08)
I think Satya Nadella said straight up, “I’m coming for it.”
Lex Fridman
(03:05:11)
Interesting. So your bet, who wins AGI? That’s [inaudible 03:05:16]-
George Hotz
(03:05:15)
I don’t know about AGI. I think we’re a long way away from that. But I would not be surprised, if in the next five years, Bing overtakes Google as a search engine.
Lex Fridman
(03:05:24)
Interesting.
George Hotz
(03:05:25)
Wouldn’t surprise me.
Lex Fridman
(03:05:26)
Interesting. I hope some startup does.
George Hotz
(03:05:33)
It might be some startup too. I would equally bet on some startup.
Lex Fridman
(03:05:37)
Yeah. I’m like 50 50. But maybe that’s naive. I believe in the power of these language models.
George Hotz
(03:05:43)
Satya is alive. Microsoft’s alive.
Lex Fridman
(03:05:45)
Yeah, it’s great. It’s great. I like all the innovation in these companies. They’re not being stale, and to the degree they’re being stale, they’re losing. So there’s a huge incentive to do a lot of exciting work and open source work, this is incredible.
George Hotz
(03:06:01)
Only way to win.

Meaning of life

Lex Fridman
(03:06:02)
You’re older, you’re wiser. What’s the meaning of life, George Hotz?
George Hotz
(03:06:08)
To win.
Lex Fridman
(03:06:09)
It’s still to win?
George Hotz
(03:06:10)
Of course.
Lex Fridman
(03:06:12)
Always?
George Hotz
(03:06:13)
Of course.
Lex Fridman
(03:06:14)
What’s winning look like for you?
George Hotz
(03:06:17)
I don’t know. I haven’t figured out what the game is yet, but when I do, I want to win-
Lex Fridman
(03:06:19)
So it’s bigger than solving self-driving? It’s bigger than democratizing, decentralizing compute?
George Hotz
(03:06:29)
I think the game is to stand eye to eye with God.
Lex Fridman
(03:06:33)
I wonder what that means for you. At the end of your life, what that would look like.
George Hotz
(03:06:41)
I mean, this is what… I don’t know. There’s probably some ego trip of mine. “You want to stand eye to eye with God. You’re just blasphemous, man.” Okay. I don’t know. I don’t know. I don’t know. I don’t know if it would upset God. I think he wants that. I mean, I certainly want that from my creations. I want my creations to stand eye to eye with me. So why wouldn’t God want me to stand eye to eye with him? That’s the best I can do, golden rule.
Lex Fridman
(03:07:11)
I’m just imagining the creator of a video game having to stand eye to eye with one of the characters.
George Hotz
(03:07:22)
I only watched season one of Westworld. But yeah, we got to find the maze and solve it.
Lex Fridman
(03:07:27)
Yeah. I wonder what that looks like. It feels like a really special time in human history, where that’s actually possible. There’s something about AI that’s… we’re playing with something weird here. Something really weird.
George Hotz
(03:07:41)
I wrote a blog post. I reread Genesis, and it looks like they give you some clues at the end of Genesis for finding the Garden of Eden. And I’m interested. I’m interested.
Lex Fridman
(03:07:54)
Well, I hope you find just that, George, you’re one of my favorite people. Thank you for doing everything you’re doing and in this case, for fighting for open source or for decentralization of AI. It’s a fight worth fighting, fight worth winning, hashtag. I love you, brother. These conversations are always great. Hope to talk to you many more times. Good luck with Tiny Corp.
George Hotz
(03:08:15)
Thank you. Great to be here.
Lex Fridman
(03:08:17)
Thanks for listening to this conversation with George Hotz. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Albert Einstein, “Everything should be made as simple as possible, but not simpler.” Thank you for listening and hope to see you next time.

Transcript for Jimmy Wales: Wikipedia | Lex Fridman Podcast #385

This is a transcript of Lex Fridman Podcast #385 with Jimmy Wales.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Jimmy Wales
(00:00:00)
We’ve never bowed down to government pressure anywhere in the world, and we never will. We understand that we’re hardcore, and actually, there is a bit of nuance about how different companies respond to this, but our response has always been just to say no. If they threaten to block, well, knock yourself out. You’re going to lose Wikipedia.
Lex Fridman
(00:00:21)
The following is a conversation with Jimmy Wales, co-founder of Wikipedia, one of, if not the most impactful websites ever, expanding the collective knowledge, intelligence, and wisdom of human civilization. This is Lex Fridman podcast. To support it, please check out our sponsors in the description. Now, dear friends, here’s Jimmy Wales.

Origin story of Wikipedia

Lex Fridman
(00:00:47)
Let’s start at the beginning. What is the origin story of Wikipedia?
Jimmy Wales
(00:00:51)
The origin story of Wikipedia, well, so I was watching the growth of the free software movement, open-source software, and seeing programmers coming together to collaborate in new ways, sharing code, doing that under free license, which is really interesting because it empowers an ability to work together. That’s really hard to do if the code is still proprietary, because then if I chip in and help, we have to figure out how I’m going to be rewarded and what that is. But the idea that everyone can copy it and it just is part of the commons really empowered a huge wave of creative software production. I realized that that kind of collaboration could extend beyond just software to all kinds of cultural works.

(00:01:38)
The first thing that I thought of was an encyclopedia and thought, “Oh, that seems obvious that an encyclopedia, you can collaborate on it.” There’s a few reasons why. One, we all pretty much know what an encyclopedia entry on, say, the Eiffel Tower should be like. You should see a picture, a few pictures, maybe, history, location, something about the architect, et cetera, et cetera. So we have a shared understanding of what it is we’re trying to do, and then we can collaborate and different people can chip in and find sources and so on and so forth. So I first set up Nupedia, which was about two years before Wikipedia.

(00:02:18)
With Nupedia, we had this idea that in order to be respected, we had to be even more academic than a traditional encyclopedia, because with a bunch of volunteers on the internet putting together an encyclopedia, you could be made fun of if it’s just every random person. So we had implemented this seven-stage review process to get anything published, and two things came of that. So one thing, one of the earliest entries that we published after this rigorous process, a few days later, we had to pull it, because as soon as it hit the web and the broader community took a look at it, people noticed plagiarism and realized that it wasn’t actually that good, even though it had been reviewed by academics and so on. So we had to pull it. So it’s like, “Oh, okay. Well, so much for a seven-stage review process.”

(00:03:07)
I was frustrated, “Why is this taking so long? Why is it so hard?” So I thought, “Okay.” I saw that Robert Merton had won a Nobel Prize in economics for his work on option pricing theory. When I was in academia, that’s what I worked on, option pricing theory; I had a published paper. So I’d worked through all of his academic papers, and I knew his work quite well. I thought, “Oh, I’ll write a short biography of Merton.” When I started to do it, I’d been out of academia, I hadn’t been a grad student for a few years then. I felt this huge intimidation because they were going to take my draft and send it to the most prestigious finance professors that we could find to give me feedback for revisions. It felt like being back in grad school. It’s this really oppressive feeling, like, you’re going to submit it for review and you’re going to get critiques.
Lex Fridman
(00:03:59)
A little bit of the bad part of grad school.
Jimmy Wales
(00:04:01)
Yeah, yeah, the bad part of grad school. So I was like, “Oh, this isn’t intellectually fun, this is like the bad part of grad school. It’s intimidating, and there’s a lot of potential embarrassment if I screw something up and so forth.” So that was when I realized, “Okay, look, this is never going to work. This is not something that people are really going to want to do.” So Jeremy Rosenfeld, one of my employees, had brought in and showed me the Wiki concept in December, and then Larry Sanger brought in the same and said, “What about this Wiki idea?” So in January, we decided to launch Wikipedia, but we weren’t sure. So the original project was called Nupedia. Even though it wasn’t successful, we did have quite a group of academics and really serious people.

(00:04:45)
We were concerned that, “Oh, maybe these academics are going to really hate this idea, and we shouldn’t just convert the project immediately. We should launch this as a side project, the idea of here’s a Wiki where we can start playing around.” But actually, we got more work done in two weeks than we had in almost two years because people were able to just jump on and start doing stuff, and it was actually a very exciting time. Back then, you could be the first person who typed Africa as a continent and hit Save, which isn’t much of an encyclopedia entry, but it’s true, and it’s a start and it’s kind of fun, like put your name down.

(00:05:20)
Actually, a funny story was several years later, I just happened to be online and I saw that, I think his name is Robert Aumann, had won the Nobel Prize in economics. We didn’t have an entry on him at all, which was surprising, but it wasn’t that surprising. This was still early days. So I got to be the first person to type that Robert Aumann won the Nobel Prize in economics and hit Save, which again, wasn’t a very good article. But then I came back two days later and people had improved it and so forth. So that second half of the experience, whereas with Robert Merton I never succeeded because it was just too intimidating, here it was like, “Oh, I was able to chip in and help, other people jumped in. Everybody was interested in the topic, because it’s all in the news at the moment.” So it’s just a completely different model, which worked much, much better.
Lex Fridman
(00:06:03)
Well, what is it that made that so accessible, so fun, so natural to just add something?
Jimmy Wales
(00:06:09)
Well, I think, especially in the early days, and this, by the way, has gotten much harder because there are fewer topics that are just greenfield available. But you could say, “Oh, well, I know a little bit about this, and I can get it started.” But then it is fun to come back then and see other people have added and improved and so on and so forth. That idea of collaborating where people can, much like open-source software, you put your code out and then people suggest revisions. They change it, and it modifies and it grows beyond the original creator, it’s just a fun, wonderful, quite geeky hobby, but people enjoy it.

Design of Wikipedia

Lex Fridman
(00:06:51)
How much debate was there over the interface, over the details of how to make that, seamless and frictionless?
Jimmy Wales
(00:06:57)
Yeah, not as much as there probably should have been, in a way. During that two years of the failure of Nupedia where very little work got done, what was actually productive was, there was a huge, long email discussion; very clever people talking about things like neutrality, talking about what is an encyclopedia, but also talking about more technical ideas. Back then, XML was all the rage, and people were thinking about things like: shouldn’t you have certain data that might be in multiple articles that gets updated automatically? So for example, the population of New York City, every 10 years there’s a new official census, couldn’t you just update that bit of data in one place and it would update across all languages? That is a reality today. But back then it was just like, “Hmm, how do we do that? How do we think about that?”
Lex Fridman
(00:07:47)
So that is a reality today where it’s-
Jimmy Wales
(00:07:48)
Yeah-
Lex Fridman
(00:07:49)
… there’s some-
Jimmy Wales
(00:07:50)
Yeah, so Wikidata-
Lex Fridman
(00:07:50)
… universal variables? Wikidata.
Jimmy Wales
(00:07:56)
Yeah, Wikidata. From a Wikipedia entry, you can link to that piece of data in Wikidata, and it’s a pretty advanced thing, but there are advanced users who are doing that. Then when that gets updated, it updates in all the languages where you’ve done that.
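The single-source-of-truth idea described here can be sketched as a toy in a few lines of Python. This is only an illustration of the concept, not Wikidata’s actual data model or API; the `facts` store and `render_article` helper are invented for this example (Q60 is Wikidata’s real identifier for New York City):

```python
# Toy sketch of the Wikidata idea: one canonical fact store,
# many language editions that render from it.

facts = {"Q60": {"population": 8_804_190}}  # Q60: New York City

def render_article(lang: str, qid: str) -> str:
    """Render a one-line 'article' in the given language from the shared store."""
    pop = facts[qid]["population"]
    templates = {
        "en": f"New York City has a population of {pop:,}.",
        "de": f"New York City hat {pop:,} Einwohner.",
    }
    return templates[lang]

# Updating the fact once (a new census, say) changes every language's rendering.
facts["Q60"]["population"] = 8_500_000
```

Because both language templates read from the same entry, the census update above is reflected in every language at once, which is the property being described.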
Lex Fridman
(00:08:07)
That’s really interesting. There was this chain of emails in the early days of discussing the details of what is. So there’s the interface, there’s the-
Jimmy Wales
(00:08:14)
Yeah, so the interface, so an example, there was some software called UseModWiki, which we started with. It’s quite amusing actually, because the main reason we launched with UseModWiki is that it was a single Perl script, so it was really easy for me to install it on the server and just get running. But it was some guy’s hobby project, it was cool, but it was just a hobby project. All the data was stored in flat text files, so there was no real database behind it. So to search the site, you basically used grep, which is just the basic Unix utility for looking through all the files. So that clearly was never going to scale. But also in the early days, it didn’t have real logins. So you could set your username, but there were no passwords. So I might say Bob Smith, and then someone else comes along and says, “No, I’m Bob Smith,” and they both had it. Now that never really happened.

(00:09:10)
We didn’t have a problem with it, but it was obvious, you can’t grow a big website where everybody can pretend to be everybody. That’s not going to be good for trust and reputation and so forth. So quickly, I had to write a little login, store people’s passwords and things like that so you could have unique identities. Then another example of something you would’ve never thought would’ve been a good idea, and it turned out to not be a problem. But to make a link in Wikipedia in the early days, you would make a link to a page that may or may not exist by just using CamelCase, meaning it’s like upper case, lowercase, and you smash the words together. So maybe New York City, you might type N-E-W, no space, capital Y, York City, and that would make a link, but that was ugly. That was clearly not right. So I was like, “Okay, well that’s just not going to look nice. Let’s just use square brackets, two square brackets makes a link.”

(00:10:04)
That may have been an option in the software. I’m not sure I thought up square brackets. But anyway, we just did that, which worked really well. It makes nice links, and you can see red links or blue links, depending on whether the page exists or not. But the thing that didn’t occur to me even to think about is that, for example, on the German-language standard keyboard, there is no square bracket. So for German Wikipedia to succeed, people had to learn to do some alt codes to get the square bracket, or a lot of users would cut and paste a square bracket from wherever they could find one. Yet German Wikipedia has been a massive success, so somehow that didn’t slow people down.
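The two linking conventions described here, CamelCase words and double square brackets, can be sketched with a toy link extractor. The regexes below are illustrative simplifications made up for this example, not UseModWiki’s or MediaWiki’s actual parsing rules:

```python
import re

# Old CamelCase convention: smashed-together capitalized words become links.
CAMEL = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

# Square-bracket convention: [[Page Name]] becomes a link (rendered red or
# blue depending on whether the target page exists).
BRACKET = re.compile(r"\[\[([^\]|]+)\]\]")

def camel_links(text):
    """Return CamelCase link targets, e.g. 'NewYorkCity'."""
    return CAMEL.findall(text)

def bracket_links(text):
    """Return square-bracket link targets, e.g. 'New York City'."""
    return BRACKET.findall(text)
```

An ordinary capitalized word like “Paris” is not CamelCase, so it never linked under the old scheme; that is why “New York City” had to be typed as NewYorkCity, and why the bracket syntax reads so much more naturally.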
Lex Fridman
(00:10:40)
How is it that German keyboards don’t have a square bracket? How do you do programming? How do you live life to its fullest without square brackets?
Jimmy Wales
(00:10:48)
It’s a very good question. I’m not really sure. Maybe it does now, because keyboard standards have drifted over time and it becomes useful to have a certain character. It’s the same thing: there’s not really a W character in Italian, and it wasn’t on keyboards, or I think it is now. But in general, W is not a letter in the Italian language, though it appears in enough international words that it’s crept into Italian.
Lex Fridman
(00:11:12)
All of these things are probably Wikipedia articles in themselves.
Jimmy Wales
(00:11:17)
Oh, yes. Oh, yeah.
Lex Fridman
(00:11:17)
The discussion of square brackets-
Jimmy Wales
(00:11:17)
That is a whole-
Lex Fridman
(00:11:17)
… in German-
Jimmy Wales
(00:11:19)
… whole discussion, I’m sure.
Lex Fridman
(00:11:20)
… on both the English and the German Wikipedia. The difference between those two might be very-
Jimmy Wales
(00:11:27)
Interesting.
Lex Fridman
(00:11:27)
… interesting. So Wikidata is fascinating, but even the broader discussion of what is an encyclopedia, can you go to that philosophical question of-
Jimmy Wales
(00:11:37)
Sure.
Lex Fridman
(00:11:37)
… what is an encyclopedia?
Jimmy Wales
(00:11:39)
What is an encyclopedia? So the way I would put it is an encyclopedia, or what our goal is, is the sum of all human knowledge, but “sum” meaning summary. This was an early debate. Somebody started uploading the full text of Hamlet, for example, and we said, “Mmm, wait, hold on a second. That’s not an encyclopedia article, but why not?” So hence was born Wikisource, which is where you put original texts and things like that, out-of-copyright texts, because they said, “No, an encyclopedia article about Hamlet, that’s a perfectly valid thing. But the actual text of the play is not an encyclopedia article.” So most of it’s fairly obvious, but there are some interesting quirks and differences. So for example, as I understand it, in French-language encyclopedias, traditionally it would be quite common to have recipes, which in English-language encyclopedias would be unusual. You wouldn’t find a recipe for chocolate cake in Britannica. So I actually don’t know the current state, haven’t thought about that in many, many years now.
Lex Fridman
(00:12:44)
State of cake recipes in Wikipedia, in English, Wikipedia?
Jimmy Wales
(00:12:47)
I wouldn’t say there’s chocolate cake recipes. You might find a sample recipe somewhere. I’m not saying there are none, but in general, no, we wouldn’t have recipes-
Lex Fridman
(00:12:55)
I told myself I would not get outraged in this conversation, but now I’m outraged. I’m deeply upset.
Jimmy Wales
(00:13:00)
It’s actually very complicated. I love to cook. I’m actually quite a good cook. What’s interesting is it’s very hard to have a neutral recipe because [inaudible 00:13:12]
Lex Fridman
(00:13:12)
Like a canonical recipe for cake-
Jimmy Wales
(00:13:13)
A canonical recipe is-
Lex Fridman
(00:13:14)
… chocolate cake.
Jimmy Wales
(00:13:15)
… is kind of difficult to come by because there’s so many variants and it’s all debatable and interesting. For something like chocolate cake, you could probably say, “Here’s one of the earliest recipes,” or, “Here’s one of the most common recipes.” But for many, many things, the variants are as interesting as somebody said to me recently, 10 Spaniards, 12 paella recipes. So these are all matters of open discussion.

Number of articles on Wikipedia

Lex Fridman
(00:13:44)
Well, just to throw some numbers, as of May 27, 2023, there are 6.6 million articles in the English Wikipedia containing over 4.3 billion words. Including articles, the total number of pages is 58 million.
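As a quick sanity check on those figures, the implied average article length works out to roughly 650 words:

```python
articles = 6_600_000        # English Wikipedia article count, May 2023
words = 4_300_000_000       # total words across those articles

avg = words / articles      # about 650 words per article on average
print(f"{avg:.0f}")
```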
Jimmy Wales
(00:14:05)
Yeah.
Lex Fridman
(00:14:06)
Does that blow your mind?
Jimmy Wales
(00:14:08)
It does and it doesn’t. It doesn’t, because I know those numbers and see them from time to time. But in another sense, a deeper sense, yeah, it does. It’s really remarkable. I remember when English Wikipedia passed 100,000 articles, and when German Wikipedia passed 100,000, ’cause I happened to be in Germany with a bunch of Wikipedians that night, and then it seemed quite big. We knew at that time that it was nowhere near complete. I remember at Wikimania at Harvard, when we did our annual conference there in Boston, someone who had come to the conference from Poland had brought along with him a small encyclopedia, a single-volume encyclopedia of biographies, so short biographies, normally a paragraph or so, about famous people in Poland, and there were some 22,000 entries. He pointed out that even then, in 2006, Wikipedia felt quite big.

(00:15:12)
He said that in English Wikipedia, there’s only a handful of these, less than 10%, I think he said. So then you realized, yeah, actually, who was the mayor of Warsaw in 1873? Don’t know. Probably not in English Wikipedia, though it might be today, but there’s so much out there. Of course, what we get into when we’re talking about how many entries there are and how many could there be, is this very deep philosophical issue of notability, which is the question of, well, how do you draw the limit? How do you draw what is there? So sometimes people say, “Oh, there should be no limit.” But I think that doesn’t stand up to much scrutiny if you really pause and think about it. So I see in your hand there you’ve got a BIC pen, pretty standard. Everybody’s seen billions of those in life.
Lex Fridman
(00:16:05)
Classic though.
Jimmy Wales
(00:16:05)
It’s a classic, clear BIC pen. So could we have an entry about that BIC pen? I’ll bet we do, that type of BIC pen, because it’s a classic. Everybody knows it, and it’s got a history. Actually, there’s something interesting about the BIC company. They make pens, they also make kayaks, and there’s something else they’re famous for. Basically, they’re a definition-by-non-essentials company. Anything that’s long and plastic, that’s what they make.
Lex Fridman
(00:16:33)
Wow, that’s very-
Jimmy Wales
(00:16:34)
If you want to find the common ground-
Lex Fridman
(00:16:36)
… platonic form, the platonic form of a BIC.
Jimmy Wales
(00:16:37)
But could we have an article about that very BIC pen in your hand, so Lex Fridman’s BIC pen as of this week?
Lex Fridman
(00:16:45)
Oh, the very, this instance-
Jimmy Wales
(00:16:45)
The very specific instance, and the answer is no, there’s not much known about it. I dare say, unless it’s very special to you and your great-grandmother gave it to you or something, you probably know very little about it. It’s a pen. It’s just here in the office. So that’s just to show there is a limit. In German Wikipedia, they used to talk about the rear nut of the wheel of [inaudible 00:17:10] bicycle [inaudible 00:17:11] a well-known Wikipedian of the time, to sort of illustrate, you can’t have an article about literally everything. So then it raises the question, what can you have an article about? What can’t you? That can vary depending on the subject matter. One of the areas where we try to be much more careful would be biographies. The reason is a biography of a living person, if you get it wrong, you can actually be quite hurtful, quite damaging.

(00:17:38)
So if someone is a private person and somebody tries to create a Wikipedia entry, there’s not much to go on; there’s not much known. So for example, an encyclopedia article about my mother: my mother, a schoolteacher, later a pharmacist, a wonderful woman, but never been in the news, other than me talking about why there shouldn’t be a Wikipedia entry, which has probably made it in somewhere as the standard example. But there’s not enough known. You could imagine a database of genealogy having date of birth, date of death, certain elements like that of private people. But you couldn’t really write a biography. One of the areas this comes up quite often is what we call BLP1E. We’ve got lots of acronyms. Biography of a living person who’s notable for only one event is a real danger zone.
Lex Fridman
(00:18:27)
Oh.
Jimmy Wales
(00:18:28)
The type of example would be a victim of a crime, so someone who’s a victim of a famous serial killer, but about whom really not much is known. They weren’t a public person, they’re just a victim of a crime, we really shouldn’t have an article about that person. They’ll be mentioned, of course, and maybe this specific crime might have an article. But for that person, no, not really. That’s not really something that makes any sense because how can you write a biography about someone you don’t know much about? It varies from field to field. So for example, for many academics, we will have an entry that we might not have in a different context because for an academic, it’s important to have sort of their career, what papers they’ve published, things like that.

(00:19:13)
You may not know anything about their personal life, but that’s actually not encyclopedically relevant in the same way that it is for member of a royal family where it’s basically all about the family. So we’re fairly nuanced about notability and where it comes in. I’ve always thought that the term notability, I think, is a little problematic. We struggled about how to talk about it. The problem with notability is it can feel insulting. Say, “Oh no, you’re not noteworthy.” Well, my mother’s noteworthy. She’s a really important person in my life, so that’s not right. But it’s more like verifiability. Is there a way to get information that actually makes an encyclopedia entry?

Wikipedia pages for living persons

Lex Fridman
(00:19:56)
It so happens that there’s a Wikipedia page about me as I’ve learned recently, and the first thought I had when I saw that was, “Surely I am not notable enough.” So I was very surprised and grateful that such a page could exist and actually, just allow me to say thank you to all the incredible people that are part of creating and maintaining Wikipedia. It’s my favorite website on the internet. The collection of articles that Wikipedia has created is just incredible. We’ll talk about the various details of that. But the love and care that goes into creating pages for individuals, for a BIC pen, for all this kind of stuff is just really incredible.

(00:20:43)
So I just felt the love when I saw that page. But I also felt just because I do this podcast and I just through this podcast, gotten to know a few individuals that are quite controversial, I’ve gotten to be on the receiving end of something quite … to me as a person who loves other human beings, I’ve gotten to be at the receiving end of some attacks through Wikipedia. Like you said, when you look at living individuals, it can be quite hurtful, the little details of information. Because I’ve become friends with Elon Musk and I’ve interviewed him, but I’ve also interviewed people on the left, far left, people on the right, some would say far right, and so now you take a step, you put your toe into the cold pool of politics and the shark emerges from the depths and pulls you right in.
Jimmy Wales
(00:21:41)
Yeah, the boiling hot pool of politics.
Lex Fridman
(00:21:43)
I guess it’s hot, and so I got to experience some of that. I think what you also realize is there has to be, for Wikipedia credible sources, verifiable sources, and there’s a dance there because some of the sources are pieces of journalism. Of course, journalism operates under its own complicated incentives such that people can write articles that are not factual or are cherry-picking all the flaws they can have in a journalistic article-
Jimmy Wales
(00:22:18)
For sure.
Lex Fridman
(00:22:18)
… and those can be used as-
Jimmy Wales
(00:22:20)
For sure.
Lex Fridman
(00:22:21)
… as sources. It’s like they dance hand-in-hand. So for me, sadly enough, there was a really concerted attack to say that I was never at MIT, never did anything at MIT. Just to clarify, I am a research scientist at MIT. I have been there since 2015. I’m there today. I’m at a prestigious, amazing laboratory called LIDS, and I hope to be there for a long time. I work on AI, robotics, machine learning. There’s a lot of incredible people there. By the way, MIT has been very kind to defend me. Unlike what Wikipedia says, it is not an unpaid position. There was no controversy.
Jimmy Wales
(00:23:03)
Right.
Lex Fridman
(00:23:03)
It was all very calm and happy and almost boring research that I’ve been doing there. The other thing, because I am half-Ukrainian, half-Russian-
Jimmy Wales
(00:23:14)
Oh.
Lex Fridman
(00:23:15)
… and I’ve traveled to Ukraine and I will travel to Ukraine again, and I will travel to Russia for some very difficult conversations. My heart’s been broken by this war. I have family in both places. It’s been a really difficult time. But the little battle about the biography there also starts becoming important for the first time for me. I also want to use this opportunity to clarify some inaccuracies there. My father was not born in Chkalovsk, Russia. He was born in Kiev, Ukraine. I was born in Chkalovsk, which is a town not in Russia. There is a town called that in Russia, but there’s another town in Tajikistan, which is a former republic of the Soviet Union. That town is now called B-U-S-T-O-N, Buston, which is funny because we’re now in Austin, and I also am in Boston; it seems like my whole life is surrounded by these kinds of towns.

(00:24:13)
So I was born in Tajikistan, and the rest of the biography is interesting, but my family is very evenly distributed, in their origins and where they grew up, between Ukraine and Russia, which adds a whole beautiful complexity to this whole thing. So I want to just correct that. The fascinating thing about Wikipedia is that in some sense, those little details don’t matter. But in another sense, what I felt when I saw a Wikipedia page about me or anybody I know is there’s this beautiful sense that this person existed, like a community that notices you, that says, “Huh.” You see a butterfly that floats, and you’re like, “Huh?” It’s not just any butterfly, it’s that one. “I like that one.” Or you see a puppy or something, or it’s this BIC pen: “I remember this one, it has this scratch.” You get noticed in that way, and I know it’s a beautiful thing. Maybe it’s very silly of me and naive, but I feel like Wikipedia, in terms of individuals, is an opportunity to celebrate people, to celebrate ideas-
Jimmy Wales
(00:25:26)
For sure. For sure.
Lex Fridman
(00:25:26)
… and not a battleground of the kind of stuff we might see on Twitter, like the mockery, the derision, this kind of stuff.
Jimmy Wales
(00:25:35)
For sure.
Lex Fridman
(00:25:36)
Of course, you don’t want to cherry-pick. All of us have flaws and so on, but it just feels like to highlight a controversy of some sort, when that doesn’t at all represent the entirety of the human, in most cases, is sad.
Jimmy Wales
(00:25:50)
Yeah. Yeah. Yeah. So there’s a few things to unpack in all that. So first, one of the things I always find very interesting is your status with MIT. Okay, that’s upsetting, and it’s an argument and can be sorted out. But then what’s interesting is you gave as much time to that, which is actually important and relevant to your career and so on, as to where your father was born, which most people would hardly notice, but is really meaningful to you. I find that a lot when I talk to people who have a biography in Wikipedia: they’re often just as annoyed by a tiny error that no one’s going to notice, like this town in Tajikistan’s got a new name and so on. Nobody even knows what that means or whatever, but it can be super important. So that’s one of the reasons for biographies, we say human dignity really matters. So some of the things have to do with, and this is a common debate that goes on in Wikipedia, what we call undue weight. So I’ll give an example.

(00:26:59)
There was a article I stumbled across many years ago about the mayor, or no, he wasn’t a mayor, he was a city council member of, I think it was Peoria, Illinois, but some small town in the Midwest. The entry, he’s been on the city council for 30 years or whatever. He’s frankly, a pretty boring guy and seems like a good local city politician. But in this very short biography, there was a whole paragraph, a long paragraph about his son being arrested for DUI, and it was clearly undue weight. It’s like, “What has this got to do with this guy if it even deserves a mention?” It wasn’t even clear had he done anything hypocritical, had he done himself anything wrong, even was his son, his son got a DUI.

(00:27:44)
That’s never great, but it happens to people, and it doesn’t seem like a massive scandal for your dad. So of course, I just took that out immediately. This was a long, long time ago. That’s the sort of thing we really have to think about in a biography, and about controversies, to say, “Is this a real controversy?” So in general, one of the things we tend to say is any section, so if there’s a biography and there’s a section called controversies, that’s actually poor practice, because it just invites people to say, “Oh, I want to work on this entry.” There’s, say, seven sections. “Oh, this one’s quite short. Can I add something?”
Lex Fridman
(00:28:23)
Right?
Jimmy Wales
(00:28:24)
Go out and find some more controversies. Now that’s nonsense, right?
Lex Fridman
(00:28:24)
Yeah.
Jimmy Wales
(00:28:26)
In general, putting it separate from everything else makes it seem worse, and also, it doesn’t put it in the right context. Whereas, if it’s a real flaw and there is a controversy, there’s always potential controversy for anyone, it should just be worked into the overall article, ’cause then it doesn’t become a temptation. You can contextualize appropriately and so forth. So that’s part of the whole process. But I think for me, one of the most important things is what I call community health. So yeah, are we going to get it wrong sometimes? Yeah, of course. We’re humans, and doing good, quality reference material is hard. The real question is, how do people react to a criticism or a complaint or a concern? If the reaction is defensiveness or combativeness back, or if someone’s really in there being aggressive and in the wrong, you’ve got to say, “No, no, no, hold on, we’ve got to do this the right way. Okay, hold on. Are there good sources? Is this contextualized appropriately? Is it even important enough to mention? What does it mean?”

(00:29:40)
Sometimes, one of the areas where I do think there is a very complicated problem, and you’ve alluded to it a little bit, is that we know the media is deeply flawed. We know that journalism can go wrong. I would say particularly in the last, whatever, 15 years, we’ve seen a real decimation of local media, local newspapers. We’ve seen a real rise in clickbait headlines and an eager focus on anything that might be controversial. We’ve always had that with us, of course; there have always been tabloid newspapers. But that makes it a little bit more challenging to say, “Okay, how do we sort things out when we have a pretty good sense that not every source is valid?” So as an example, a few years ago, it’s been quite a while now, we deprecated the MailOnline as a source. The MailOnline, the digital arm of the Daily Mail, is a tabloid.

(00:30:46)
It’s not fake news, but it does tend to run very hyped-up stories. They really love to attack people and go on the attack for political reasons and so on, and it just isn’t great. So by saying deprecated, and I think some people say, “Oh, you banned the Daily Mail?” No, we didn’t ban it as a source. We just said, “Look, it’s probably not a great source. You should probably look for a better source.” So certainly, if the Daily Mail runs a headline saying “New Cure for Cancer,” probably there are more serious sources than a tabloid newspaper. So in an article about lung cancer, you probably wouldn’t cite the Daily Mail. That’s kind of ridiculous. But also for celebrities and so forth to know, well, they do cover celebrity gossip a lot, but they also tend to have vendettas and so forth. You really have to step back and go, “Is this really encyclopedic, or is this just the Daily Mail going on a rant?”
Lex Fridman
(00:31:39)
Some of that requires great community health.
Jimmy Wales
(00:31:41)
It requires massive community health.
Lex Fridman
(00:31:43)
Even for me, for stuff I’ve seen that’s actually kind of iffy about people I know, things I know about myself, I still feel a love for knowledge emanating from the article. I feel the community health, so I will take the slight inaccuracies. I love it, because that means there are people who, for the most part, I feel, have respect and love in this search for knowledge. Sometimes, ’cause I also love Stack Overflow and Stack Exchange for programming-related things, they can get a little cranky, to a degree where it’s not as… you can feel the dynamics of the health of the particular community, and sub-communities too. A particular C# or Java or Python community, or whatever, there’s little communities that emerge. You can feel the levels of toxicity, ’cause a little bit of strictness is good, but a little too much is bad, because of the defensiveness, ’cause when somebody writes an answer and then somebody else modifies it, they get defensive, and there’s this tension that’s not conducive to improving towards a more truthful depiction of that topic.
Jimmy Wales
(00:33:02)
Yeah, a great example that I really loved: this morning I saw someone left a note on my user talk page in English Wikipedia with quite a dramatic headline, saying “racist hook on front page.” So on the front page of Wikipedia, we have a little section called Did You Know? And it’s just little tidbits and facts, just things people find interesting. And there’s a whole process for how things get there. And the one that somebody was raising a question about was comparing a very well-known US football player, who is Black; there was a quote from another famous sports person comparing him to a Lamborghini. Clearly a compliment. And so somebody said, “Actually, here’s a study, here’s some interesting information about how Black sports people are far more often compared to inanimate objects and given that kind of analogy, and I think it’s demeaning to compare a person to a car, et cetera, et cetera.”

(00:34:01)
But they said, “I’m not pulling it, I’m not deleting it, I’m not removing it. I just want to raise the question.” And then there’s this really interesting conversation that goes on, where I think the general consensus was, you know what, this isn’t the alarming headline, racist thing on the front page of Wikipedia, holy moly, that sounds bad. But it’s sort of like, actually, yeah, this probably isn’t the sort of analogy that we think is great. And so we should probably think about how to improve our language and not compare sports people to inanimate objects, and particularly be aware of certain racial sensitivities that there might be around that sort of thing, if there is a disparity in how the media describes people.

(00:34:40)
And I just thought, you know what, nothing for me to weigh in on here. This is a good conversation. Like nobody’s saying people should be banned if they refer to, what was his name, The Fridge, Refrigerator Perry. Very famous comparison to an inanimate object of a Chicago Bears player, many years ago. But they’re just saying, hey, let’s be careful about analogies that we just pick up from the media. I said, “Yeah, that’s good.”
Lex Fridman
(00:35:06)
The deprecation of news sources is really interesting, because I think what you’re saying is, ultimately you want to make an article-by-article decision, use your own judgment. And it’s such a subtle thing, because there’s just a lot of hit pieces written about individuals like myself, for example, that masquerade as an objective, thorough exploration of a human being. It’s fascinating to watch, because controversy and hit pieces just get more clicks.
Jimmy Wales
(00:35:41)
Oh yeah, sure.
Lex Fridman
(00:35:41)
This is, I guess, something that as a Wikipedia contributor you start to become deeply aware of, and you start to have a sense, a radar, for clickbait versus truth, to pick out the truth from the clickbaity-type language.
Jimmy Wales
(00:35:58)
Oh, yeah. I mean it’s really important and we talk a lot about weasel words. And actually I’m sure we’ll end up talking about AI and ChatGPT.
Lex Fridman
(00:36:10)
Yes.
Jimmy Wales
(00:36:10)
But just to quickly mention in this area, I think one of the potentially powerful tools, because it is quite good at this, and I’ve played around with it and practiced quite a lot: ChatGPT-4 is really quite able to take a passage and point out potentially biased terms, to rewrite it to be more neutral. Now, it is a bit anodyne and it’s a bit clichéd, so sometimes it just takes the spirit out of something that’s actually not bad, it’s just poetic language, and you’re like, okay, that’s not actually helping. But in many cases I think that sort of thing is quite interesting. And I’m also interested in… Can you imagine where you feed in a Wikipedia entry and all the sources, and you say, help me find anything in the article that is not accurately reflecting what’s in the sources? And that doesn’t have to be perfect. It only has to be good enough to be useful to the community.

(00:37:17)
So if it scans-
Lex Fridman
(00:37:19)
Beautiful.
Jimmy Wales
(00:37:19)
… an article and all the sources and you say, oh, it came back with 10 suggestions and seven of them were decent and three of them it just didn’t understand, well actually that’s probably worth my time to do. And it can help us really more quickly get good people to review obscure entries and things like that.
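The check Wales imagines here, scanning an article against its cited sources and coming back with a handful of suggestions, can be sketched as a simple prompt-assembly step before any model call. This is a hypothetical illustration, not Wikipedia tooling: the function name, the toy article, and the source name are all made up, and the actual LLM request is deliberately left out.

```python
# Hypothetical sketch: assemble a prompt asking a language model to flag
# claims in a Wikipedia article that its cited sources don't support.
# Only the prompt construction is shown; the model call itself is omitted.

def build_verification_prompt(article_text: str, sources: dict[str, str]) -> str:
    """Combine an article and its sources into one fact-checking prompt."""
    parts = [
        "You are helping Wikipedia editors. Compare the article below against",
        "its cited sources and list any claims the sources do not support.",
        "",
        "=== ARTICLE ===",
        article_text,
    ]
    # Append each source under its own labeled header.
    for name, text in sources.items():
        parts += ["", f"=== SOURCE: {name} ===", text]
    parts += ["", "Return a numbered list of unsupported or misquoted claims."]
    return "\n".join(parts)

# Toy example: one claim in the article is contradicted by the source.
prompt = build_verification_prompt(
    "Austin is the capital of Texas. It was founded in 1700.",
    {"city-history.example": "Austin was incorporated in 1839."},
)
```

An editor would send `prompt` to whatever model is available and treat the reply the way Wales describes: if seven of ten suggestions are decent, it was worth the time.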
Lex Fridman
(00:37:41)
So just as a small aside on that, and we’ll probably talk about language models a little bit or a lot more, but in one of the articles, one of the hit pieces about me, the journalist actually was very straightforward and honest about having used GPT to write part of the article.
Jimmy Wales
(00:37:59)
Interesting.
Lex Fridman
(00:37:59)
And then finding that it made an error, and apologizing for the error that GPT-4 generated. Which has this kind of interesting loop: the articles are used to write Wikipedia pages, GPT is trained on Wikipedia, and there’s this interesting loop where the weasel words and the nuances can get lost or can propagate, even though they’re not grounded in reality. Somehow, in the generation of the language model, new truths can be created and kind of linger.
Jimmy Wales
(00:38:35)
Yeah, there’s a famous web comic titled “Citogenesis,” which is about how an error gets into Wikipedia with no source for it, but then a lazy journalist reads it and writes it up, and then some helpful Wikipedian spots that it has no source, finds the journalist’s piece, and adds it to Wikipedia as the source, and voilà, magic. This happened to me once. Well, it nearly happened. It was really brief, and I went back and researched it, like, this is really odd. So Biography Magazine, which is a magazine published by the Biography TV channel, had a profile of me, and it said, “In his spare time,” I’m not quoting exactly, it’s been many years, but, “In his spare time he enjoys playing chess with friends.” I thought, wow, that sounds great. I would like to be that guy. But actually, I play chess with my kids sometimes, but no, it’s not a hobby of mine.

(00:39:31)
And I was like, where did they get that? And I contacted the magazine and said, where did that come from? They said, “Oh, it was in Wikipedia.” And I looked in the history; there had been vandalism of Wikipedia, which was not damaging, it’s just false, and it had already been removed. But then I thought, “Oh gosh, well, I better mention this to people, because otherwise somebody’s going to read that and they’re going to add it to the entry, and it’s going to take on a life of its own.” And then sometimes I wonder if it has, because I was invited a few years ago to do the ceremonial first move in the World Chess Championship. And I thought, I wonder if they think I’m a really big chess enthusiast because they read this Biography Magazine article.

(00:40:10)
But that problem, when we think about large language models and the ability to quickly generate very plausible but not true content, I think is something that there’s going to be a lot of shakeout and a lot of implications of that.
Lex Fridman
(00:40:25)
What would be hilarious is if, because of the social pressure of Wikipedia and the momentum, you would actually start playing a lot more chess. Not only are the articles written based on Wikipedia, but your own life trajectory changes because of Wikipedia, just to make it more convenient. Something to aspire to.
Jimmy Wales
(00:40:45)
Aspire to, yes. Yeah, aspirational.

ChatGPT

Lex Fridman
(00:40:48)
If we could just talk about that before we jump back to some other interesting topics in Wikipedia. Let’s talk about GPT-4 and large language models. So they are in part trained on Wikipedia content. What are the pros and cons of these language models? What are your thoughts?
Jimmy Wales
(00:41:07)
Yeah, so I mean, there’s a lot of stuff going on. Obviously the technology has moved very quickly in the last six months and looks poised to do so for some time to come. So first things first, part of our philosophy is the open licensing, the free licensing, the idea that this is what we’re here for. We are a volunteer community and we write this encyclopedia. We give it to the world to do what you like with: you can modify it, redistribute it, redistribute modified versions, commercially, non-commercially. This is the licensing. So in that sense, of course it’s completely fine. Now, we do worry a bit about attribution, because it is a Creative Commons Attribution-ShareAlike license. So attribution is important, not just because of our licensing model and things like that, but because proper attribution is just good intellectual practice.

(00:42:02)
And that’s a really hard complicated question. If I were to write something about my visit here, I might say in a blog post I was in Austin, which is a city in Texas, I’m not going to put a source for Austin as a city in Texas. That’s just general knowledge. I learned it somewhere, I can’t tell you where. So you don’t have to cite and reference every single thing. But if I actually did research and I used something very heavily, it’s just proper, morally proper, to give your sources. So we would like to see that. And obviously they call it grounding. So particularly people at Google are really keen on figuring out grounding.
Lex Fridman
(00:42:48)
It’s such a cool term. So any text that’s generated trying to ground it to the Wikipedia quality-
Jimmy Wales
(00:42:57)
A source.
Lex Fridman
(00:42:57)
… a source. The same kind of standard of what a source means that Wikipedia uses, the same kind of source-
Jimmy Wales
(00:42:57)
The same kind.
Lex Fridman
(00:42:57)
… would be generated but with a graph.
Jimmy Wales
(00:43:05)
The same kind of thing. And of course, one of the biggest flaws in ChatGPT right now is that it just literally will make things up just to be amiable. I think it’s programmed to be very helpful and amiable and it doesn’t really know or care about the truth.
Lex Fridman
(00:43:21)
It can get bullied into… it can be convinced into…
Jimmy Wales
(00:43:25)
Well, but this morning, the story I was telling earlier about comparing a football player to a Lamborghini, I thought, is that really racial? I don’t know, but I’m mulling it over. And I thought, oh, I’m going to go to ChatGPT. So I said to ChatGPT-4, “This happened in Wikipedia. Can you think of examples where a white athlete has been compared to a fast car or an inanimate object?” And it comes back with a very plausible essay where it tells why these analogies are common in sport, blah, blah. I said, “No, no, could you give me some specific examples?” So it gives me three specific examples, very plausible, correct names of athletes and contemporaries, and all of it could have been true. I Googled every single quote, and none of them existed. And so I’m like, “Well, that’s really not good.”

(00:44:14)
I wanted to explore the thought process I was in. First I thought, how do I Google this? And it’s like, well, it’s kind of a hard thing to Google, because unless somebody’s written about this specific topic… whereas a large language model has processed all this data, so it can probably piece that together for me. But it just can’t yet. So I think, I hope, that ChatGPT-5, 6, 7, in three to five years, I’m hoping we’ll see a much higher level of accuracy, where when you ask a question like that, instead of being quite so eager to please by giving you a plausible-sounding answer, it just says, I don’t know.
Lex Fridman
(00:44:55)
Or maybe display how much bullshit might be in the generated text: “I really would like to make you happy right now, but I’m really stretched thin with this generation.”
Jimmy Wales
(00:45:07)
Well, it’s one of the things I’ve said for a long time. So in Wikipedia, one of the great things we do, which may not be great for our reputation, except in a deeper sense, for the long term, I think it is, is we’ll have a notice that says “the neutrality of this section has been disputed,” or “the following section doesn’t cite any sources.” And I always joke, sometimes I wish the New York Times would run a banner saying “the neutrality of this has been disputed.” They could tell us, “We had a big fight in the newsroom as to whether to run this or not, but we thought it was important enough to bring to you. But just be aware that not all the journalists are on board with it.” Ah, that’s actually interesting, and that’s fine. I would trust them more for that level of transparency. So yeah, similarly, ChatGPT should say, yeah, 87% bullshit.
Lex Fridman
(00:45:51)
Well, the neutrality one is really interesting, because that’s basically a summary of the discussions that are going on underneath. It would be amazing if… I should be honest, I don’t look at the talk page often. It would be nice somehow if there was a kind of summary, in this banner way, of like, lots of wars have been fought on this here land for this here paragraph.
Jimmy Wales
(00:46:16)
That’s really interesting, I hadn’t thought of that. Because one of the things I do spend a lot of time thinking about these days, and as people have found, we’re moving slowly, but we are moving, is, okay, these tools exist; are there ways that this stuff can be useful to our community? Because part of it is we do approach things in a non-commercial way, in a really deep sense. It’s been great that Wikipedia has become very popular, but really, we’re a community whose hobby is writing an encyclopedia. That’s first, and if it’s popular, great. If it’s not, okay. We might have trouble paying for more servers, but it’ll be fine.

(00:46:53)
And so, how do we help the community use these tools? One of the ways that these tools can support people, and it’s an example I’d never thought about, I’m going to start playing with it, is to feed in the article and feed in the talk page and say, can you suggest some warnings in the article based on the conversations in the talk page? I think it might-
Lex Fridman
(00:46:53)
That’s brilliant.
Jimmy Wales
(00:47:12)
… be good at that. It might get it wrong sometimes. But again, if it’s reasonably successful at doing that, and you can say, oh, actually, yeah, it does suggest the neutrality of this has been disputed on a section that has a seven-page discussion in the back that might be useful, don’t know, worth playing with.
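As a rough sketch of the idea, feed in the article, feed in the talk page, ask for warning banners, the prompt side might look like the following. The banner texts echo the Wikipedia maintenance notices mentioned in this conversation, but the function and the toy inputs are illustrative assumptions, not anything the community actually runs.

```python
# Hypothetical sketch: ask a model which maintenance warnings an article
# might need, based on its talk-page discussion. Only prompt assembly is
# shown; the model call and response parsing are left out.

CANDIDATE_WARNINGS = [
    "The neutrality of this section is disputed.",
    "This section does not cite any sources.",
    "The factual accuracy of this article is disputed.",
]

def build_warning_prompt(article: str, talk_page: str) -> str:
    """Combine an article and its talk page into a banner-suggestion prompt."""
    warning_list = "\n".join(f"- {w}" for w in CANDIDATE_WARNINGS)
    return (
        "Given the Wikipedia article and its talk page below, suggest which\n"
        "of these warnings, if any, apply, and to which sections:\n"
        + warning_list + "\n\n"
        "=== ARTICLE ===\n" + article + "\n\n"
        "=== TALK PAGE ===\n" + talk_page
    )

# Toy example echoing the wall/fence naming dispute discussed in this
# conversation; both strings are made up for illustration.
prompt = build_warning_prompt(
    "The barrier is a security fence.",
    "Editors dispute whether 'fence' or 'wall' is the neutral wording.",
)
```

As Wales says, the model’s suggestions would only need to be right often enough to be worth a human editor’s review, not perfect.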
Lex Fridman
(00:47:30)
Yeah, I mean, some more color on, not just the neutrality, but also the amount of emotion laden in the exploration of this particular part of the topic. It might actually help you look at more controversial pages, like a page on the war in Ukraine, or a page on Israel and Palestine. There could be parts that everyone agrees on, and there’s parts that are just like-
Jimmy Wales
(00:47:58)
Tough.
Lex Fridman
(00:47:59)
… tough.
Jimmy Wales
(00:47:59)
The hard parts.
Lex Fridman
(00:48:00)
It would be nice to, when looking at those beautiful long articles to know, all right, let me just take in some stuff where everybody agrees on.
Jimmy Wales
(00:48:09)
I could give an example that I haven’t looked at in a long time, but I was really pleased with what I saw at the time. So the discussion was that they’re building something in Israel, and for their own political reasons, one side calls it a wall, hearkening back to the Berlin Wall, apartheid, and the other calls it a security fence. So we can understand quite quickly, if we give it a moment’s thought, okay, I understand why people would have this grappling over the language. You want to highlight the negative aspects of this, or you want to highlight the positive aspects, so you’re going to try and choose a different name. And so there was this really fantastic Wikipedia discussion on the talk page: how do we word the paragraph that talks about the different naming? It’s called this by Israelis, it’s called this by Palestinians. And how you explain that to people could be quite charged. You could easily explain, oh, there’s this difference, and it’s because this side’s good and this side’s bad, and that’s why there’s a difference. Or you could say, actually, let’s just try and stay as neutral as we can and try to explain the reasons. So you may come away from it with a concept: oh, okay, I understand what this debate is about now.
Lex Fridman
(00:49:26)
And just the term Israel-Palestine conflict is still the title of a page in Wikipedia, but the word conflict is something that is a charged word.
Jimmy Wales
(00:49:41)
Of course.
Lex Fridman
(00:49:42)
Because from the Palestinian side, or from certain sides, the word conflict doesn’t accurately describe the situation. Because if you see it as a genocide one way, a genocide is not a conflict. To the people that challenge the word conflict, a conflict is when there’s two equally powerful sides fighting.
Jimmy Wales
(00:50:05)
Sure, yeah, yeah. No, it’s hard. And in a number of cases, so this actually speaks to a slightly broader phenomenon, which is, there are a number of cases where there is no one word that can get consensus. And in the body of an article, that’s usually okay, because we can explain the whole thing. You can come away with an understanding of why each side wants to use a certain word. But there are some aspects, like, the page has to have a title, so there’s that. Same thing with certain things like photos. There’s different photos, which one’s best? Lots of different views on that. But at the end of the day, you need the lead photo, because there’s one slot for a lead photo. Categories is another one. So at one point, and I have no idea if it’s in there today, but I don’t think so, I was listed in “American entrepreneurs,” fine.

(00:51:03)
“American atheists,” and I said, that doesn’t feel right to me. It’s true, I mean, I wouldn’t disagree with the objective fact of it, but when you click the category, you see a lot of people who are, you might say, American atheist activists, because that’s their big issue. So Madalyn Murray O’Hair, or various famous people, Richard Dawkins, who make it a big part of their public argument and persona. But that’s not true of me. It’s just my private personal belief; it’s not something I campaign about. So it felt weird to put me in the category. But what category would you put? And do you need one? In this case I argued it doesn’t need that. I don’t speak about it publicly, except incidentally from time to time, and I don’t campaign about it. So it’s weird to put me with this group of people.

(00:51:54)
And that argument carried the day, I hope not just because it was me. But categories can be like that, where you’re either in the category or you’re not, and sometimes it’s a lot more complicated than that. And again, we go back to, is it undue weight? If someone who is now prominent in public life and generally considered to be a good person was convicted of something, let’s say a DUI when they were young, in normal discourse we don’t think, oh, this person should be in the category of American criminals. Because you think, oh, a criminal. Yeah, technically speaking, it’s against the law to drive under the influence of alcohol, and you were arrested and you spent a month in prison or whatever, but it’s odd to call that person a criminal.

(00:52:45)
So just as an example in this area is Mark Wahlberg, Marky Mark is what I always think of him as, because that was his first famous name, who I wouldn’t think should be listed in the category American criminals. Even though he was convicted of quite a bad crime when he was a young person, we don’t think of him as a criminal. Should the entry talk about that? Yeah, it’s actually an important part of his life story that he had a very rough youth, and he could have gone down a really dark path, and he turned his life around. That’s actually interesting. So categories are tricky.
Lex Fridman
(00:53:20)
Especially with people, because we like to assign labels to people and to ideas somehow, and those labels stick. And there are certain words that have a lot of power: criminal; political left, right, center; anarchist; objectivist. What other philosophies are there? Marxist, communist, social democrat, democratic socialist, socialist. And if you add that as a category, all of a sudden it’s like, oh boy, you’re that guy now. And I don’t know if you want to be that guy.
Jimmy Wales
(00:53:58)
Well, there’s definitely some really charged ones, like alt-right. I think it’s quite complicated and tough. It’s not a completely meaningless label, but boy, I think you really have to pause before you actually put that label on someone, partly because now you’re putting them in a group of people, some of whom you really wouldn’t want to be grouped with.

Wikipedia’s political bias

Lex Fridman
(00:54:20)
Let’s go into some of, as you mentioned, the hot water of the pool that we’re both dipping a toe in. Do you think Wikipedia has a left-leaning political bias, which is something it is sometimes accused of?
Jimmy Wales
(00:54:31)
Yeah, so I don’t think so, not broadly. And I think you can always point to specific entries and talk about specific biases, but that’s part of the process of Wikipedia; anyone can come and challenge and go on about that. But I see fairly often on Twitter some quite extreme accusations of bias, and I think, actually, I don’t see it. I don’t buy that. And if you ask people for an example, they normally struggle, depending on who they are and what it’s about. So it’s certainly true that there are some people who have quite fringe viewpoints, and who knows, in the full sweep of history, in 500 years they might be considered to be pathbreaking geniuses, but at the moment, quite fringe views. And they’re just unhappy that Wikipedia doesn’t report on their fringe views as being mainstream. And that, by the way, goes across all kinds of fields.

(00:55:36)
I was once accosted on the street outside the TED Conference in Vancouver by a guy who was a homeopath, who was very upset that Wikipedia’s entry on homeopathy basically says it’s pseudoscience. And he felt that was biased. And I said, “Well, I can’t really help you, because we cite good quality sources to talk about the scientific status, and it’s not very good.” So it depends, and I think it’s something that we should always be vigilant about. But in general, I think we’re pretty good. And I think any time you go to any serious political controversy, we should have a pretty balanced perspective on who’s saying what and what the views are and so forth. I would actually argue that the areas where we are more likely to have bias that persists for a long period of time are actually fairly obscure things, or maybe fairly non-political things.

(00:56:40)
I’ll just give, it’s kind of a humorous example, but it’s meaningful. If you read our entries about Japanese anime, they tend to be very, very positive and very favorable, because almost no one knows about Japanese anime except for fans. And so the people who come and spend their days writing Japanese anime articles, they love it. They have an inherent love for the whole area. Now, of course, being human beings, they have their internal debates and disputes about what’s better or not. But in general, they’re quite positive, because nobody actually cares. Whereas on anything that people are quite passionate about, hopefully there’s quite a lot of interesting stuff.

(00:57:20)
So I’ll give an example, a contemporary example where I think we’ve done a good job, as of my most recent look at it, and that is the question about the efficacy of masks during the COVID pandemic. And that’s an area where I would say the public authorities really jerked us all around a bit. In the very first days, they said, “Whatever you do, don’t rush out and buy masks.” And their concern was shortages in hospitals, fair enough. Later it’s like, no, everybody’s got to wear a mask everywhere, it really works well. And now, I think, the evidence is mixed, right? Masks seem to help, in my personal view, masks seem to help. They’re no huge burden. You might as well wear a mask in any environment where you’re with a giant crowd of people and so forth.

(00:58:13)
But it’s very politicized, that one, certainly in the US, much more so. I live in the UK, I live in London, and I’ve never seen on the streets the kind of thing there are a lot of reports of in the US: people actively angry because someone else is wearing a mask, that sort of thing in public. So because it became very politicized, then clearly, anyway, if you go to Wikipedia and you research this topic, I think you’ll find more or less what I’ve just said. At this point in history, it’s mixed evidence: masks seemed to help, but maybe not as much as some of the authorities said. And here we are.

(00:58:56)
And that’s kind of an example where I think, okay, we’ve done a good job, but I suspect there are people on both sides of that very emotional debate who think this is ridiculous. Hopefully we’ve got quality sources, so then hopefully those people who read this can say, oh, actually, it is complicated. If you can get to the point of saying, okay, I have my view, but I understand other views, and I do think it’s a complicated question, great, now we’re a little bit more mature as a society.
Lex Fridman
(00:59:24)
Well, that one is an interesting one, because I feel like, I hope, that that article also contains the meta conversation about the politicization of that topic. To me, it’s almost more interesting than whether masks work or not, at least at this point. It’s like, why did masks become a symbol of the oppression of a centralized government? If you wear them, you’re a sheep that follows the mask mandates, the mass hysteria, of an authoritarian regime. And if you don’t wear a mask, then you are a science denier, an anti-vaxxer, alt-right, probably a Nazi.
Jimmy Wales
(01:00:07)
Exactly. And that whole politicization of society is just so damaging, and I don’t know, in the broader world, how do we start to fix that? That’s a really hard question.

Conspiracy theories

Lex Fridman
(01:00:21)
Well, at every moment, because you mentioned mainstream and fringe, there seems to be a tension here, and I wonder what your philosophy is on it. There are mainstream ideas and there are fringe ideas. Look at the lab leak theory for this virus, and there are other things we can discuss, where there’s a mainstream narrative, if you just look at what the percent of the population, or the percent of the population with platforms, say, and then there’s a small percentage in opposition to that. What is Wikipedia’s responsibility to accurately represent both the mainstream and the fringe, do you think?
Jimmy Wales
(01:01:05)
Well, I think we have to try to do our best to recognize both, but also to appropriately contextualize. And this can be quite hard, particularly when emotions are high. That’s just a fact about human beings. I’ll give a simpler example, because there’s not a lot of emotion around it. Our entry on the moon doesn’t say, “Some say the moon’s made of rocks, some say cheese, who knows?” That kind of false neutrality is not what we want to get to. That doesn’t make any sense, but that one’s easy; we all understand. I think there is a Wikipedia entry called something like “The moon is made of cheese,” where it talks about how this is a common joke, a thing that children say, or that people tell to children, or whatever. It’s just a thing everyone’s heard, the moon’s made of cheese, but nobody thinks, wow, Wikipedia is so one-sided it doesn’t even acknowledge the cheese theory. I say the same thing about flat Earth, again, very-
Lex Fridman
(01:02:08)
That’s exactly what I’m looking up right now.
Jimmy Wales
(01:02:09)
… very little controversy. We will have an entry about flat Earth theorizing, flat Earth people. My personal view is most of the people who claim to be flat Earthers are just having a laugh, trolling, and more power to them, have some fun, but let’s not be ridiculous.
Lex Fridman
(01:02:31)
Then of course, for most of human history, people believed that the Earth is flat, so the article I’m looking at is actually kind of focusing on this history. “Flat Earth is an archaic and scientifically disproven conception of the Earth’s shape as a plane or disc. Many ancient cultures subscribed to a flat-Earth cosmography.” With pretty cool pictures of what a flat Earth would look like, with a dragon, is that a dragon? No, angels on the edge. There’s a lot of controversy about that. What is at the edge? Is it a wall? Is it angels? Is it dragons? Is there a dome?
Jimmy Wales
(01:03:00)
And how can you fly from South Africa to Perth? Because on a flat Earth view, that’s really too far for any plane to make it because-
Lex Fridman
(01:03:09)
What I want to know-
Jimmy Wales
(01:03:10)
It’s all spread out.
Lex Fridman
(01:03:11)
What I want to know is what’s on the other side, Jimmy, what’s on the other side? That’s what all of us want to know. So I presume there’s probably a small section about the conspiracy theory of flat Earth, because I think there’s a sizeable percent of the population who at least will say they believe in a flat Earth.
Jimmy Wales
(01:03:31)
Yeah.
Lex Fridman
(01:03:32)
I think it is a movement that just says to have distrust and skepticism about the mainstream narrative, which to a very small degree is probably a very productive thing to do as part of the scientific process. But you can get a little silly and ridiculous with it.
Jimmy Wales
(01:03:49)
Yeah, I mean that's exactly right. And so I find in many, many cases, and of course I, like anybody else, might quibble about this or that in any Wikipedia article, but in general, I think there is a pretty good sort of willingness and indeed eagerness to say, oh, let's fairly represent all of the meaningfully important sides. There's still a lot to unpack in that, right? Meaningfully important. So people who are raising questions about the efficacy of masks, okay, that's actually a reasonable thing to have a discussion about, and hopefully we should treat that as a fair conversation to have and actually address which authorities have said what and so on and so forth. And then there are other cases where it's not meaningful opposition. I doubt the main article Moon even mentions cheese, probably not, because it's not credible and it's not even meant to be serious by anyone. And the article on the Earth certainly won't have a paragraph that says, well, most scientists think it's round, but certain people think it's flat.

(01:05:12)
That's just a silly thing to put in that article. You would want to address that it's an interesting cultural phenomenon; you want to put it somewhere. So this goes into all kinds of things about politics. You want to be really careful, really thoughtful about not getting caught up in the anger of our times. I remember being really kind of proud of the US at the time when McCain was running against Obama, because I thought, "Oh, I've got plenty of disagreements with both of them, but they both seem like thoughtful and interesting people who I would have different disagreements with." But I always felt like, yeah, that's good, now we can have a debate. Now we can have an interesting debate.
Jimmy Wales
(01:06:00)
It isn’t just people slamming each other with personal attacks and so forth.
Lex Fridman
(01:06:05)
You’re saying Wikipedia has also represented that?
Jimmy Wales
(01:06:09)
I hope so. Yeah, and I think so in the main. Obviously, you can always find debate that went horribly wrong because there’s humans involved.
Lex Fridman
(01:06:18)
But speaking of those humans, I would venture to guess, I don't know the data, maybe you can let me know, but the personal political leaning of the group of people who edit Wikipedia probably leans left, I would guess. To me, the question there is, I mean the same is true for Silicon Valley, the task for Silicon Valley is to create platforms that are not politically biased even though there is a bias in the engineers who create them. I believe it's possible to do that. There are conspiracy theories that it somehow is impossible, this whole conspiracy where the left is controlling it, and so on. I think engineers, for the most part, want to create platforms that are open and unbiased, that surface all kinds of perspectives, because it's super exciting to have all kinds of perspectives battle it out. But still, is there a degree to which the personal political bias of the editors might seep in, in silly ways and in big ways?

(01:07:22)
Silly ways could be, I think, hopefully I'm correct in saying this, but the right will call it the Democrat Party and the left will call it the Democratic Party, right? It always hits my ear weird. Are we children here? We're literally taking words and just jabbing at each other. Yeah, I could capitalize a thing in a certain way, or I can just take a word and mess with it. That's a small way of how you use words, but there are also bigger ways, about beliefs, about various perspectives on political events, on Hunter Biden's laptop, on how big of a story that is or not, how big the censorship of that story is or not. And then there are these camps that take very strong positions and construct big narratives around them. A very sizable percent of the population believes each of the two narratives that compete with each other.
Jimmy Wales
(01:08:21)
Yeah. It's really interesting, and it's hard to judge the sweep of history within your own lifetime, but it feels like it's gotten much worse, that this idea of two parallel universes where people can't agree on certain basic facts feels worse than it used to be. I'm not sure if that's true or if it just feels that way, and I'm not sure what the causes are. I think I would lay a lot of the blame in recent years on social media algorithms, which reward clickbait headlines, which reward tweets that go viral, and they go viral because they're cute and clever.

(01:09:13)
My most successful tweet ever by a fairly wide margin: some reporter tweeted at Elon Musk, because he was complaining about Wikipedia or something, "You should buy Wikipedia," and I just wrote, "Not for sale," and 90 zillion retweets, and people liked it, and it was all very good, but I'm like, "You know what? It's a cute line and it's a good mic drop," and all that, and I was pleased with myself, but I'm like, "It's not really a discourse." It's not really what I like to do, but it's what social media really rewards, which is kind of a let's-you-and-him-have-a-fight, and that's more interesting. It's funny because at the time, I was texting with Elon, who's very pleasant to me, and all of that.
Lex Fridman
(01:10:01)
He might have been a little bit shitty, the reporter might have been a little bit shitty, but you fed into the shittiness with a snarky, funny response, "Not for sale." That's a funny little exchange, and you can probably laugh it off afterwards and it's fun, but that kind of mechanism that rewards the snark can turn into viciousness.
Jimmy Wales
(01:10:22)
Yeah. Well, and we certainly see it online. A tweet thread of 15 tweets that assesses the quality of the evidence for masks, pros and cons and so on, that's not going to go viral, but a smackdown of a famous politician who was famously in favor of masks, who also went to a dinner and didn't wear a mask, that's going to go viral, and that's partly human nature. People love to call out hypocrisy and all of that, but it's partly what these systems elevate automatically. I talk about this with respect to Facebook, for example. I think Facebook has done a pretty good job, although it's taken longer than it should in some cases, but if you have a very large following and you're really spouting hatred or misinformation, disinformation, they've kicked people off.

(01:11:24)
They've done some reasonable things there, but for the deeper issue of the anger we're talking about, of the contentiousness of everything, I give a family example with two great stereotypes: one, the crackpot racist uncle, and one, the sweet grandma. I always want to point out that all of the uncles in my family were wonderful people, so I didn't have a crackpot racist uncle, but everybody knows the stereotype. Well, grandma just posts sweet comments on the kids' pictures and congratulates people on their wedding anniversaries, and the crackpot uncle's posting his nonsense. Normally, it's at Christmas dinner, everybody rolls their eyes, "Oh, yeah, Uncle Frank's here, and he's probably going to say some racist comment and we're going to tell him to shut up, or maybe let's not invite him this year." Normal human drama. He's got his three mates down at the pub who listen to him and all of that, but now grandma's got 54 followers on Facebook, which is the intimate family, and the racist uncle has 714, so he's not a massive influence or whatever, but how did that happen?

(01:12:36)
It’s because the algorithm notices when she posts, nothing happens. He posts and then everybody jumps in to go, “God, shut up, Uncle Frank. That’s outrageous,” and there’s engagement, there’s page views, there’s ads. Those algorithms, I think they’re working to improve that, but it’s really hard for them. It’s hard to improve that if that actually is working. If the people who are saying things that get engagement, if it’s not too awful, but it’s just, maybe it’s not a racist uncle, but maybe it’s an uncle who posts a lot about what an idiot Biden is, which isn’t necessarily an offensive or blockable or bannable thing, and it shouldn’t be, but if that’s the discourse that gets elevated because it gets a rise out of people, then suddenly in a society, it’s like, “Oh, we get more of what we reward,” so I think that’s a piece of what’s gone on.
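The dynamic Wales describes, where a feed ranked on raw engagement elevates the provocative uncle over the sweet grandma, can be sketched in a few lines. Everything here (the names, the numbers, and the report-based penalty) is a hypothetical illustration, not any real platform's ranking algorithm.

```python
# Two posts: grandma gets a few kind likes; uncle Frank gets a pile of
# angry replies and reports. All values are made up for illustration.
posts = [
    {"author": "grandma", "likes": 5, "replies": 0, "reports": 0},
    {"author": "uncle_frank", "likes": 2, "replies": 40, "reports": 10},
]

def engagement_score(post):
    # Pure engagement ranking: every interaction counts,
    # including the angry "shut up, Uncle Frank" replies.
    return post["likes"] + post["replies"]

def adjusted_score(post):
    # One possible adjustment: discount interactions on heavily
    # reported content, so rage engagement no longer wins by default.
    penalty = 1.0 / (1.0 + post["reports"])
    return (post["likes"] + post["replies"]) * penalty

top_by_engagement = max(posts, key=engagement_score)["author"]
top_by_adjusted = max(posts, key=adjusted_score)["author"]
print(top_by_engagement, top_by_adjusted)
```

Under pure engagement the uncle tops the feed; with the reported-content discount, grandma does, which is the kind of re-weighting the conversation is gesturing at.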

Facebook

Lex Fridman
(01:13:28)
Well, if we could just take that tangent: I'm having a conversation with Mark Zuckerberg, a second time. Is there something you can comment on about how to decrease toxicity on that particular platform, Facebook? You've also worked on creating a less toxic social network yourself, so can we just talk about the different ideas that these already-big social networks can adopt, and what you have been trying to do?
Jimmy Wales
(01:13:55)
A piece of it is, it's hard. The problem with making a recommendation to Facebook is that I actually believe their business model makes it really hard for them, and I'm not anti-capitalism; it's not, "Great, somebody's got a business, they're making money," that's not where I come from. But certain business models mean you are going to prioritize things that maybe aren't long-term healthful, and so that's a big piece of it. Certainly, for Facebook, you could say: with vast resources, start to prioritize content that's higher quality, that's healing, that's kind. Try not to prioritize content that seems to be just getting a rise out of people. Now, those are vague human descriptions, but I do believe that with good machine learning algorithms you can optimize in slightly different ways, but to do that, you may have to say, "Actually, we're not necessarily going to increase page views to the maximum extent right now."

(01:14:59)
I've said this to people at Facebook: if your actions are convincing people that you're breaking Western civilization, that's really bad for business in the long run. Certainly, these days, I'll say Twitter is the thing that's on people's minds as being more upsetting at the moment, but I think it's true. One of the things that's really interesting about Facebook compared to a lot of companies is that Mark has a pretty unprecedented amount of power. His ability to name members of the board, his control of the company, is pretty hard to break, even if financial results aren't as good as they could be because he's taken a step back from the perfect optimization to say, "Actually, for the long-term health of this organization over the next 50 years, we need to rein in some of the things that are making us money because they're actually giving us a bad reputation." One of the recommendations I would make, and this is not to do with the algorithms and all that, is: how about just a moratorium on all political advertising?

(01:16:11)
I don't think it's their most profitable segment, but it's given rise to a lot of deep, hard questions about dark money, about ads run by questionable people pushing false narratives. The classic kind of thing: I saw a study about Brexit in the UK, where there were ads run targeting animal rights activists, saying, "Finally, when we're out from under Europe, the UK can pass proper animal rights legislation. We're not constrained by the European process." Similarly, for people who are advocates of fox hunting: "Finally, when we're out of Europe, we can re-implement…" You're telling people what they want to hear, and in some cases, it's really hard for journalists to see that. So it used to be that for political advertising, you really needed to find some kind of mainstream narrative, and this is still true to an extent, a mainstream narrative that 60% of people can say, "Oh, I can buy into that," which meant it pushed you to the center.

(01:17:20)
It pushed you to try and find some nuanced balance, but if your main method of recruiting people is a tiny little one-on-one conversation with them, because you're able to target using targeted advertising, suddenly you don't need consistency. You just need a really good targeting operation, a really good Cambridge Analytica-style machine learning algorithm and data to convince people. That just feels really problematic, so until they can think about how to solve that problem, I would just say, "You know what? It's going to cost us X amount, but it's going to be worth it to say: we actually think our political advertising policy hasn't really helped contribute to discourse and dialogue, to finding reasoned middle ground and compromise solutions, so let's just not do that for a while until we figure it out." So that's maybe a piece of advice.
Lex Fridman
(01:18:15)
Coupled with, as you were saying, recommender systems for the newsfeed and other contexts that don't always optimize engagement, but optimize the long-term mental wellbeing and balance and growth of a human being. But it's a very difficult problem.
Jimmy Wales
(01:18:33)
It's a difficult problem, yeah. With WT Social, WikiTribune Social, we're launching in a few months' time a completely new system, new domain, new lots of things, but the idea is to say, let's focus on trust. People can rate each other as trustworthy, rate content as trustworthy. You have to start from somewhere, so it'll start with a core base of our tiny community, who I think are sensible, thoughtful people, and we want to recruit more. But to say, you know what, actually, let's have that as a pretty strong element: let's not optimize based on what gets the most page views in this session, let's optimize on what the feedback from people is, on "this is meaningfully enhancing my life." Part of that, and it's probably not a good business model, is to say, "Okay, we're not going to pursue an advertising business model, but a membership model, where you don't have to be a member, but you can pay to be a member."

(01:19:36)
You maybe get some benefit from that, but in general, the division I would draw, the analogy I would give, is that broadcast television funded by advertising gives you a different result than paying for HBO, paying for Netflix, paying for whatever. The reason is, if you think about it, what is your incentive as a TV producer? If you're going to make a comedy for the ABC network in the US, you basically say, "I want something that almost everybody will like and listen to," so it tends to be a little blander, family-friendly, whatever. Whereas, to use the HBO example, an old example, you say, "You know what? The Sopranos isn't for everybody, Sex and the City isn't for everybody, but between the two shows, we've got something for everybody that they're willing to pay for." So you can get edgier, higher-quality (in my own view) content, rather than saying it's got to not offend anybody in the world, it's got to be for everybody, which is really hard.

(01:20:47)
Same thing here in a social network. If your business model is advertising, it’s going to drive you in one direction. If your business model is membership, I think it drives you in a different direction. Actually, and I’ve said this to Elon about Twitter Blue, which I think wasn’t rolled out well and so forth, but the piece of that that I like is to say, look, actually, if there’s a model where your revenue is coming from people who are willing to pay for the service, even if it’s only part of your revenue, if it’s a substantial part, that does change your broader incentives to say, actually, are people going to be willing to pay for something that’s actually just toxicity in their lives? Now, I’m not sure it’s been rolled out well, I’m not sure how it’s going, and maybe I’m wrong about that as a plausible business model, but I do think it’s interesting to think about, just in broad terms, business model drives outcomes in sometimes surprising ways unless you really pause to think about it.
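A minimal sketch of the trust-based ranking idea Wales describes for WT Social: members rate each other as trustworthy, and content is scored by member feedback weighted by the rater's trust, rather than by raw page views. The names, numbers, and the averaging scheme here are all illustrative assumptions, not the actual WT Social implementation.

```python
# Hypothetical community trust ratings (0.0 to 1.0) for each member.
trust = {"alice": 0.9, "bob": 0.8, "troll": 0.1}

# Feedback on each post: did it "meaningfully enhance my life"?
# (+1 = yes, -1 = no), recorded as (rater, vote) pairs.
feedback = {
    "thoughtful_post": [("alice", +1), ("bob", +1), ("troll", -1)],
    "flame_bait": [("troll", +1), ("alice", -1)],
}

def trust_weighted_score(ratings):
    # Weight each vote by how much the community trusts the rater,
    # so a low-trust account can't dominate the ranking.
    total = sum(trust[user] * vote for user, vote in ratings)
    weight = sum(trust[user] for user, _ in ratings)
    return total / weight if weight else 0.0

ranked = sorted(feedback, key=lambda p: trust_weighted_score(feedback[p]),
                reverse=True)
print(ranked)
```

The thoughtful post outranks the flame bait even though the flame bait has a positive vote, because that vote comes from a low-trust account; that is the "optimize on trusted feedback, not page views" idea in miniature.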

Twitter

Lex Fridman
(01:21:46)
If we can just linger on Twitter and Elon before… I would love to talk to you about the underlying business model of Wikipedia, which was this brilliant, bold move at the very beginning, but since you mentioned Twitter: what do you think works? What do you think is broken about Twitter?
Jimmy Wales
(01:22:03)
It’s a long conversation, but to start with, one of the things that I always say is it’s a really hard problem, so I concede that right up front. I said this about the old ownership of Twitter and the new ownership of Twitter because unlike Wikipedia, and this is true actually for all social media, there’s a box, and the box basically says, “What do you think? What’s on your mind?” You can write whatever the hell you want, right? This is true, by the way, even for YouTube. I mean the box is to upload a video, but again, it’s just an open-ended invitation to express yourself.

(01:22:38)
What makes that hard is some people are really toxic, really bad, some people are very aggressive, they're actually stalking, they're actually abusive, and suddenly, you deal with a lot of problems. Whereas at Wikipedia, there is no box that says, "What's on your mind?" There's a box that says, "This is an entry about the moon. Please be neutral. Please state your facts." Then there's a talk page, which is not "come rant about Donald Trump." If you go on the talk page of the Donald Trump entry and you just start ranting about Donald Trump, people will say, "What are you doing? Stop doing that. We're not here to discuss that. There's a whole world of the internet out there for you to go and rant about Donald Trump."
Lex Fridman
(01:23:17)
It's just not as fun to do on Wikipedia as it somehow is on Twitter.
Jimmy Wales
(01:23:20)
Well, also on Wikipedia, people are going to say, "Stop," and, "Actually, are you here to tell us how we can improve the article, or are you just here to rant about Trump? Because that's not actually interesting." Because the goal is different. So that's just admitting and saying upfront: this is a hard problem. Certainly, I'm writing a book on trust. The idea is, in the last 20 years, we've lost trust in all kinds of institutions, in politics. The Edelman Trust Barometer survey has been done for a long time, and trust in politicians, trust in journalism, has declined substantially, and I think in many cases deservedly. So how do we restore trust, and how do we think about that?
Lex Fridman
(01:24:07)
Does that also include trust in the idea of truth?
Jimmy Wales
(01:24:13)
Trust in the idea of truth. Even the concept of facts and truth is really, really important, and the idea of uncomfortable truths is really important. When we look at Twitter and we can see, okay, this is really hard, so here’s my story about Twitter. It’s a two-part story, and it’s all pre Elon Musk ownership. Many years back, somebody accused me of horrible crimes on Twitter, and like anybody would, I was like… I’m in the public eye. People say bad things. I don’t really… I brush it off, whatever, but I’m like, “This is actually really bad.” Accusing me of pedophilia? That’s just not okay, so I thought, “I’m going to report this,” so I click report, and I report the tweet and there’s five others, and I report, and I go through the process, and then I get an email that says whatever, a couple of hours later saying, “Thank you for your report. We’re looking into this.” Great. Okay, good.

(01:25:16)
Then several hours further, I get an email back saying, “Sorry, we don’t see anything here to violate our terms of use,” and I’m like, “Okay,” so I emailed Jack and I say, “Jack, come on. This is ridiculous,” and he emails back roughly saying, “Yeah, sorry, Jimmy. Don’t worry. We’ll sort this out.” I just thought to myself, “You know what? That’s not the point. I’m Jimmy Wales, I know Jack Dorsey. I can email Jack Dorsey. He’ll listen to me because he’s got an email from me and sorts it out for me.” What about the teenager who’s being bullied and is getting abuse and getting accusations that aren’t true? Are they getting the same kind of really poor result in that case? Fast-forward a few years, same thing happens. The exact quote, it goes, “Please help me. I’m only 10 years old, and Jimmy Wales raped me last week.” I was like, “Come on. Fuck off. That’s ridiculous,” so I report. I’m like, “This time I’m reporting,” but I’m thinking, “Well, we’ll see what happens.”

(01:26:15)
This one gets even worse, because then I get the same email back saying, "Sorry, we don't see any problems." So I raised it with other members of the board who I know, and Jack, like, "This is really ridiculous. This is outrageous," and some of the board members, friends of mine, were sympathetic, so good for them. But I actually got an email back from the general counsel, head of trust and safety, saying, "Actually, there's nothing in this tweet that violates our terms of service," and giving reference to the Me Too movement: if we didn't allow accusations, the Me Too movement, it's an important thing. And I was like, "You know what? Actually, if someone says, 'I'm 10 years old and someone raped me last week,' I think the advice should be, 'Here's the phone number of the police.' You need to get the police involved. Twitter's not the place for that accusation."

(01:27:05)
Even back then… By the way, they did delete those tweets, but the rationale they gave was spammy behavior, so completely separate from the abuse of me. It was just like, "Oh, well, they were retweeting too often." Okay, whatever. That's just broken. That's a system that's not working for people in the public eye, and I'm sure it's not working for private people who get abuse. Really horrible abuse can happen. How is that today? Well, it hasn't happened to me since Elon took over, but I don't see why it couldn't, and I suspect now if I send a report and email someone, there's no one there to email me back, because he's gotten rid of a lot of the trust and safety staff, so I suspect that problem is still really hard.
Lex Fridman
(01:27:46)
Just content moderation at huge scales.
Jimmy Wales
(01:27:49)
At huge scale, it's really something. I don't know the full answer to this. A piece of it could be to say, "Actually, for making specific allegations of crimes, this isn't the place to do that. We've got a huge user base; if you've got an accusation of a crime, here's who you should call: the police, the FBI, whatever it is. It's not to be done in public." And then you do face really complicated questions about the Me Too movement and people coming forward in public and all of that, but again, probably you should talk to a journalist. Probably there are better avenues than just tweeting from an account that was created 10 days ago, obviously set up to abuse someone. I think they could do a lot better, but I also admit it's a hard problem.
Lex Fridman
(01:28:38)
There are also ways to make the same kinds of accusations indirectly, or more humorously, or in a more mocking way. In fact, the accusations you mentioned, if I were to guess, don't go that viral because they're not funny enough or cutting enough, but if you make it witty and cutting and meme it somehow, indirectly making an accusation versus directly making one, that can go viral, and that can destroy reputations, and you just get to watch. All kinds of narratives take hold.
Jimmy Wales
(01:29:09)
Yeah, no, I remember another case that didn’t bother me because it wasn’t of that nature, but somebody was saying, “I’m sure you’re making millions off of Wikipedia,” and I’m like, “No, actually, I don’t even work there. I have no salary,” and they’re like, “You’re lying. I’m going to check your 990 form,” which is the US form for tax reporting for charities, and I was like, “Yeah, here’s the link. Go read it and you’ll see I’m listed as a board member, and my salary is listed as zero.” Things like that, it’s like, “Okay.” That one, that feels like you’re wrong, but I can take that and we can have that debate quite quickly.

(01:29:52)
Again, it didn't go viral because it was kind of silly, and if anything would've gone viral, it was me responding, but that's one where, actually, I'm happy to respond, because a lot of people don't know that I don't work there, and that I don't make millions, and I'm not a billionaire. Well, they must know that because it's in most news media about me. But the other one, I didn't respond to publicly, because of the Barbra Streisand effect. Sometimes calling attention to someone who's abusing you, who basically has no followers and so on, is just a waste.
Lex Fridman
(01:30:24)
And everything you're describing now is just something that all of us have to learn, because everybody's in the public eye. I think when you have just two followers and you get bullied by one of them, it hurts just as much as when you have a large number. Your situation, I think, is echoed in the situations of millions of others, especially teenagers and kids and so on.
Jimmy Wales
(01:30:43)
Yeah, no, here's actually an example. We don't generally use my picture in the banners anymore on Wikipedia, but we did, and then we did an experiment one year where we tried other people's pictures. So one of our developers, a lovely, very sweet guy, and he doesn't look like your immediate idea of a nerdy Silicon Valley developer, he looks like a heavy metal dude, because he's cool. Suddenly, here he is with long hair and tattoos, saying, "Here's what your money goes for. Here's my letter asking for support," and he got massive abuse, people calling him creepy, really massive. His picture was being shown to 80 million people a day, not the abuse. The abuse was elsewhere on the internet. He was bothered by it.

(01:31:39)
I thought, "You know what? There is a difference." I actually am in the public eye. I get huge benefits from being in the public eye. I go around and make public speeches. Any random thing I think of, I can write and get it published in the New York Times, and I have this interesting life. He's not a public figure, and so actually he wasn't mad at us. It was just that suddenly being thrust into the public eye, you suddenly get lots of abuse. Normally, I think if you're a teenager and somebody in your class is abusing you, it's not going to go viral. It's going to be hurtful because it's local and it's your classmates or whatever, but when ordinary people go viral in some abusive way, it's really, really quite tragic.
Lex Fridman
(01:32:24)
I don't know. Even at a small scale, it feels viral. When five people at your school spread a rumor, there's this feeling like you're surrounded, and the feeling of loneliness, I think, which you're speaking to, when you at least feel like you don't have a platform to defend yourself, and then this powerlessness that I think a lot of teenagers definitely feel, and a lot of people-
Jimmy Wales
(01:32:49)
I think you’re right.
Lex Fridman
(01:32:51)
I think even when just two people make up stuff about you or lie about you or say mean things about you or bully you, that can feel like a crowd.
Jimmy Wales
(01:33:01)
Yeah. No, that’s true.
Lex Fridman
(01:33:03)
Whatever that is in our genetics and our biology and the way our brain works, that just can be a terrifying experience. Somehow, to correct that, I think because everybody feels the pain of that, everybody suffers the pain of that, I think we’ll be forced to fix that as a society, to figure out a way around that.
Jimmy Wales
(01:33:22)
I think it's really hard to fix, because that problem isn't necessarily new. Someone in high school who writes graffiti that says, "Becky is a slut," and spreads a rumor about what Becky did last weekend, that's always been damaging, it's always been hurtful, and that's really hard.
Lex Fridman
(01:33:45)
Those kinds of attacks are as old as time itself; they precede the internet. Now, what do you think about this technology that feels Wikipedia-like, community notes on Twitter? Do you like it? Pros and cons? Do you think it's scalable?
Jimmy Wales
(01:34:00)
I do like it. I don’t know enough about specifically how it’s implemented to really have a very deep view, but I do think it’s quite… The uses I’ve seen of it, I’ve found quite good, and in some cases, changed my mind. It’s like I see something, and of course, the human tendency is to retweet something that you hope is true or that you are afraid is true, or it’s that kind of quick mental action. Then I saw something that I liked and agreed with, and then a community note under it that made me think, “Oh, actually, this is a more nuanced issue,” so I like that. I think that’s really important. Now, how is it specifically implemented? Is it scalable or that? I don’t really know how they’ve done it, so I can’t really comment on that, but in general, I do think when your only mechanisms on Twitter, and you’re a big Twitter user, we know the platform and you’ve got plenty of followers and all of that, the only mechanisms are retweeting, replying, blocking.

(01:35:13)
It's a pretty limited scope, and it's kind of good if there's a way to elevate a specific thoughtful response. It kind of goes to, again, does the algorithm just pick the retweet or the… I mean, with retweeting, it's not even the algorithm that makes it viral. If Paulo Coelho, the very famous author, I don't know, I haven't looked lately, he used to have eight million Twitter followers, I think he's got 16 million now or whatever. Well, if he retweets something, it's going to get seen a lot. Elon Musk, if he retweets something, it's going to get seen a lot. That's not an algorithm. That's just the way the platform works. So, it is kind of nice if you have something else, and how that something else is designed, that's obviously a complicated question.
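Community Notes, per X's public description, tries to surface only notes rated helpful by raters who normally disagree with each other. Here is a toy sketch of that "bridging" criterion; the real system uses matrix factorization over rating data, and the cluster labels and threshold below are purely assumptions for illustration.

```python
def bridging_helpful(ratings, threshold=0.6):
    """ratings: list of (viewpoint_cluster, helpful) pairs,
    where helpful is True/False. A note qualifies only if a
    majority in *every* viewpoint cluster finds it helpful."""
    clusters = {cluster for cluster, _ in ratings}
    if len(clusters) < 2:
        # No cross-viewpoint agreement to measure yet.
        return False
    for cluster in clusters:
        votes = [helpful for c, helpful in ratings if c == cluster]
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# A note only one side likes vs. a note both sides rate helpful.
partisan_note = [("left", True), ("left", True), ("right", False)]
bridging_note = [("left", True), ("right", True), ("right", True)]
print(bridging_helpful(partisan_note), bridging_helpful(bridging_note))
```

The partisan note fails even with more total "helpful" votes, while the bridging note passes, which is how this mechanism elevates a thoughtful response beyond raw retweet counts.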
Lex Fridman
(01:35:58)
Well, there's this interesting thing that I think Twitter is doing, but I know Facebook is doing for sure, which is really interesting: what are the signals that a human can provide at scale? On Twitter, it's the retweet. On Facebook, I think you can share, and there are the basic interactions, you can comment and so on. But there's also, on Facebook, and YouTube has this too: "Would you like to see more of this, or would you like to see less of this?" They pose that sometimes. The thing that the neural net learning from that has to figure out is the intent behind you saying, "I want to see less of this."

(01:36:39)
Did you see too much of this content already? You like it, but you don't want to see so much of it; you already figured it out, great. Or does this content not make you feel good? There are so many interpretations of "I would like to see less of this," but if you get that kind of signal, it actually can create a really powerfully curated list of content that is fed to you every day, that doesn't create an echo chamber or a silo, that actually just makes you feel good in the good way: it challenges you, but it doesn't exhaust you and make you this weird animal.
Jimmy Wales
(01:37:20)
I’ve been saying for a long time, if I went on Facebook one morning and they said, “Ooh, we’re testing a new option. Rather than showing you things we think you’re going to like, we want to show you some things that we think you will disagree with, but which we have some signals that suggest it’s of quality,” I’m like, “Now, that sounds interesting.”
Lex Fridman
(01:37:40)
Yeah, that sounds really interesting.
Jimmy Wales
(01:37:41)
I want to see something where… Oh, I don’t agree with… Larry Lessig is a good friend of mine, founder of Creative Commons, and he’s moved on to doing stuff about corruption and politics and so on. I don’t always agree with Larry, but I always grapple with Larry because he’s so interesting and he’s so thoughtful, that even when we don’t agree, I’m like, “Actually, I want to hear him out because I’m going to learn from it,” and that doesn’t mean I always come around to agree with him, but I’m going to understand a perspective, and that’s really great feeling.
Lex Fridman
(01:38:12)
Yeah, there’s this interesting thing on social media where people accuse others of saying, “Well, you don’t want to hear opinions that you disagree with or ideas you disagree with.” I think this is something that’s thrown at me all the time. The reality is there’s literally almost nothing I enjoy more.
Jimmy Wales
(01:38:29)
It seems an odd thing to accuse you of because you have quite a wide range of long conversations with a very diverse bunch of people.
Lex Fridman
(01:38:35)
But there is a very, very harsh drop off because what I like is high quality disagreement. That really makes me think. At a certain point, there’s a threshold, it’s kind of a gray area, when the quality of the disagreement, it just sounds like mocking, and you’re not really interested in a deep understanding of the topic, or you yourself don’t seem to carry deep understanding of the topic. There’s something called Intelligence Squared debates that may-
Lex Fridman
(01:39:00)
There’s something called Intelligence Squared debates. The main one is the British version. With the British accent, everything always sounds better. And the Brits seem to argue more intensely, like they’re invigorated, they’re energized by the debate. Those people I often disagree with, basically everybody involved, and it’s so fun. I learned something. That’s high quality. If we could do that, if there’s some way for me to click a button that says, “Filter out lower quality just today,” just sometimes show it to me because I want to be able to, but today I’m just not in the mood for the mockery.

(01:39:38)
Just high quality stuff, because even flat Earth, I want to get high quality arguments for the flat Earth. It would make me feel good because I would see, “Oh, that’s really interesting. I never really thought in my mind to challenge the mainstream narrative of general relativity, of a perception of physics. Maybe all of reality, maybe all of space is an illusion. That’s really interesting. I never really thought about, let me consider that fully. Okay, what’s the evidence? How would you test that? What are the alternatives? How would you be able to have such consistent perception of a physical reality, if all of it is an illusion? All of us seem to share the same kind of perception of reality.” That’s the kind of stuff I love, but not the cheap mockery of it that social media seems to inspire.
Jimmy Wales
(01:40:34)
Yeah. I talk sometimes about how people assume that the big debates in Wikipedia or the arguments are between the party of the left and the party of the right. And I would say no, it’s actually the party of the kind and thoughtful and the party of the jerks, really. Left and right, yeah, bring me somebody I disagree with politically. As long as they’re thoughtful and kind, we’re going to have a real discussion. I give an example of our article on abortion: if you can bring together a kind and thoughtful Catholic priest and a kind and thoughtful Planned Parenthood activist and they’re going to work together on the article on abortion, that can be a really great thing, if they’re both kind and thoughtful. That’s the important part. They’re never going to agree on the topic, but they will understand, okay, Wikipedia is not going to take a side, but Wikipedia is going to explain what the debate is about, and we’re going to try to characterize it fairly.

(01:41:36)
And it turns out your kind and thoughtful people, even if they’re quite ideological, like a Catholic priest is generally going to be quite ideological on the subject of abortion, but they can grapple with ideas and they can discuss, and they may feel very proud of the entry at the end of the day, not because they suppress the other side’s views, but because they think the case has been stated very well that other people can come to understand it. And if you’re highly ideological, you assume, I think naturally, “If people understood as much about this as I do, they’ll probably agree with me.” You may be wrong about that, but that’s often the case. So, that’s what I think we need to encourage more of in society generally, is grappling with ideas in a really thoughtful way.

Building Wikipedia

Lex Fridman
(01:42:21)
So is it possible, if the majority of volunteers, editors of Wikipedia, really disliked Donald Trump, are they still able to write an article that empathizes with the perspective of, for a time at least, a very large percentage of the United States that was supportive of Donald Trump, and to have a full, broad representation of him as a human being, him as a political leader, him as a set of policies promised and implemented, all that kind of stuff?
Jimmy Wales
(01:42:55)
Yeah, I think so. And I think if you read the article, it’s pretty good. And I think a piece of that is within our community, if people have the self-awareness to understand. So, I personally wouldn’t go and edit the entry on Donald Trump. I get emotional about it and I’m like, “I’m not good at this,” and if I tried to do it, I would fail. I wouldn’t be a good Wikipedian, so it’s better if I just step back and let people who are more dispassionate on this topic edit it. Whereas there are other topics that are incredibly emotional to some people where I can actually do quite well. I’m going to be okay. Maybe we were discussing earlier the efficacy of masks. I’m like, “Oh, I think that’s an interesting problem. And I don’t know the answer, but I can help catalog what’s the best evidence and so on.”

(01:43:48)
I’m not going to get upset. I’m not going to get angry, able to be a good Wikipedian, so I think that’s important. And I do think though in a related framework that the composition of the community is really important. Not because Wikipedia is or should be a battleground, but because blind spots, like maybe I don’t even realize what’s biased if I’m particularly of a certain point of view, and I’ve never thought much about it. So one of the things we focus on a lot, the Wikipedia volunteers are, we don’t know the exact number, but let’s say 80% plus male, and they’re a certain demographic: they tend to be college educated, heavier on tech geeks than not, et cetera. So, there is a demographic to the community, and that’s pretty much global. Somebody said to me once, “Why is it only white men who edit Wikipedia?”, and I said, “You’ve obviously not met the Japanese Wikipedia community.”

(01:44:51)
It’s a joke because the broader principle still stands, who edits Japanese Wikipedia? A bunch of geeky men, and women as well. So, we do have women in the community, and that’s very important. But we do think, “Okay, you know what, that does lead to some problems,” it leads to some content issues simply because people write more about what they know and what they’re interested in. They’ll tend to be dismissive of things as being unimportant if it’s not something that they personally have an interest in. I like the example, as a parent I would say our entries on early childhood development probably aren’t as good as they should be because a lot of the Wikipedia volunteers… Actually we’re getting older, the Wikipedians, so that demographic has changed a bit. But if you’ve got a bunch of 25 year old tech geek dudes who don’t have kids, they’re just not going to be interested in early childhood development. And if they tried to write about it, they probably wouldn’t do a good job, ’cause they don’t know anything about it.

(01:45:53)
And somebody did a look at our entries on novelists who’ve won a major literary prize, and they looked at the male novelists versus the female, and the male novelists had longer and higher quality entries. And why is that? Well, it’s not because, ’cause I know hundreds of Wikipedians, it’s not because these are a bunch of biased, sexist men who are like, “Books by women are not important.” No. Actually, there is a gender breakdown of readership. There are books, hard science fiction’s a classic example, that are mostly read by men. Other types of novels, more read by women. And if we don’t have women in the community, then these award-winning, clearly important novelists may have less coverage. And not because anybody consciously thinks, “We don’t like a book by Maya Angelou. Who cares? She’s a poet. That’s not interesting.”

(01:46:55)
No, but just because, well, people write what they know, they write what they’re interested in. So, we do think diversity in the community is really important. And that’s one area where I do think it’s really clear. But I can also say, actually that also applies in the political sphere, to say, actually, we do want kind and thoughtful Catholic priests, kind and thoughtful conservatives, kind and thoughtful libertarians, kind and thoughtful Marxists to come in. But the key is the kind and thoughtful piece, so when people sometimes come to Wikipedia outraged by some dramatic thing that’s happened on Twitter, they come to Wikipedia with a chip on their shoulder ready to do battle, and it just doesn’t work out very well.
Lex Fridman
(01:47:38)
And there’s tribes in general where I think there’s a responsibility on the larger group to be even kinder and more welcoming to the smaller group.
Jimmy Wales
(01:47:48)
Yeah, we think that’s really important. And so oftentimes, people come in and there’s a lot… When I talk about community health, one of the aspects of that that we do think about a lot, that I think about a lot, is not about politics. It’s just like, how are we treating newcomers to the community? And so, I can tell you what our ideals are, what our philosophy is, but do we live up to that? So the ideal is you come to Wikipedia, we have rules. One of our fundamental rules is ignore all rules, which is partly written that way because it piques people’s attention, like, “Oh, what the hell kind of rule is that?” But it basically says, “Look, don’t get nervous and depressed about a bunch of rules like, what’s the formatting of your footnote?” So, you shouldn’t come to Wikipedia, add a link, and then get banned or yelled at because it’s not the right format.

(01:48:46)
Instead, somebody should go, “Oh, hey. Yeah, thanks for helping, but here’s the link to how to format. If you want to keep going, you might want to learn how to format a footnote,” and to be friendly and to be open and to say, “Oh, right, oh, you’re new and you clearly don’t know everything about Wikipedia,” and sometimes in any community, that can be quite hard. So, people come in and they’ve got a great big idea, and they’re going to propose this to the Wikipedia community, and they have no idea that’s basically a perennial discussion we’ve had 7,000 times before. And so then ideally, you would say to the person, “Oh yeah, great, thanks. A lot of people have had that idea, and here’s where we got to, and here’s the nuanced conversation we’ve had about that in the past that I think you’ll find interesting,” and sometimes people are just like, “Oh God, another one who’s come in with this idea which doesn’t work, and they don’t understand why.”
Lex Fridman
(01:49:39)
You can lose patience, but you shouldn’t.
Jimmy Wales
(01:49:40)
And that’s human, but I think it just does require really thinking in a self-aware manner of, “Oh, I was once a newbie.” Actually, I just did an interview with Emily Temple-Wood, she was Wikipedian of the Year, she’s just like a great, well-known Wikipedian. And I interviewed her for my book and she told me something I never knew, apparently it’s not a secret, she didn’t just reveal it to me, but it’s that when she started Wikipedia, she was a vandal. She came in and vandalized Wikipedia. And then basically what happened was she’d vandalized a couple of articles, and then somebody popped up on her talk page and said, “Hey, why are you doing this? We’re trying to make an encyclopedia here, and this wasn’t very kind.”

(01:50:29)
And she felt so bad. She’s like, “Oh, right. I didn’t really think of it that way.” She just was coming in, and she was 13 years old, combative and having fun, and trolling a bit. And then she’s like, “Oh, actually, I see your point,” and became a great Wikipedian. So that’s the ideal really, is that you don’t just go throw a block, “Fuck off.” You go, “Hey, what gives?”, which is I think the way we tend to treat things in real life, if you’ve got somebody who’s doing something obnoxious in your friend group, you probably go, “Hey, really, I don’t know if you’ve noticed, but I think this person is actually quite hurt that you keep making that joke about them.” And then they usually go, “Oh, I thought that was okay,” and then they stop, or they keep it up and then everybody goes, “Well, you’re the asshole.”
Lex Fridman
(01:51:21)
Well, yeah, that’s just an example that gives me faith in humanity, that we’re all capable of and wanting to be kind to each other. And in general, the fact that there’s a small group of volunteers, and they’re able to contribute so much to the organization, the collection, the discussion of all of human knowledge, it makes me so grateful to be part of this whole human project. That’s one of the reasons I love Wikipedia: it gives me faith in humanity.
Jimmy Wales
(01:51:53)
Yeah, no, I was once at Wikimania, which is our annual conference, and people come from all around the world, really active volunteers. We were at Wikimania in Alexandria, Egypt, at the closing dinner or whatever, and a friend of mine came and sat at the table. She’s been in the movement more broadly, Creative Commons, she’s not really a Wikipedian, she’d come to the conference because she’s into Creative Commons and all that. So we have dinner, and it just turned out I sat down at the table with most of the members of the English language arbitration committee, and they’re a bunch of very sweet, geeky Wikipedians.

(01:52:31)
And as we left the table, I said to her, “I still find this sense of amazement, we just had dinner with some of the most powerful people in English language media,” because they’re the people who are the final court of appeal in English Wikipedia. And thank goodness they’re not media moguls. They’re just a bunch of geeks who are just well-liked in the community because they’re kind and they’re thoughtful and they really think about things. I was like, “This is great. Love Wikipedia.”
Lex Fridman
(01:53:01)
To the degree that geeks run the best aspect of human civilization brings me joy in all aspects. And this is true programming, like Linux programmers, people that kind of specialize in a thing, and they don’t really get caught up into the mess of the bickering of society. They just do their thing, and they value the craftsmanship of it, the competence of it.
Jimmy Wales
(01:53:29)
Yeah. If you’ve never heard of this or looked into it, you’ll enjoy it. I read something recently that I didn’t even know about: the fundamental time zones, and they change from time to time. Sometimes, a country will pass daylight savings or move it by a week, whatever. There’s a file that’s on all Unix-based computers, and basically all computers end up using this file; it’s the official time zone file. But why is it official? It’s just this one guy, this guy and a community of people around him.

(01:54:04)
And basically, something weird happened and it broke something because he was on vacation. And I’m just like, isn’t that wild that you would think… First of all, most people never even think about how do computers know about time zones? Well, they know because they just use this file which tells all the time zones and which dates they change and all of that. But there’s this one guy, and he doesn’t get paid for it. With all the billions of people on the planet, he put his hand up and goes, “Yo, I’ll take care of the time zones.”
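The file Jimmy is describing appears to be the IANA tz database (tzdata), which ships with most Unix systems and which Python exposes through the standard-library `zoneinfo` module. As a minimal sketch of what that one file encodes, here is the same zone reporting two different UTC offsets across a daylight-saving transition (the zone name and dates are just illustrative):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # reads the system tz database (Python 3.9+)

# One zone name, two UTC offsets: the tz database records every
# daylight-saving rule and historical change for the zone.
ny = ZoneInfo("America/New_York")
winter = datetime(2023, 1, 15, 12, 0, tzinfo=ny)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=ny)

print(winter.utcoffset())  # UTC-5 (EST)
print(summer.utcoffset())  # UTC-4 (EDT)
```

Every rule those two lines rely on, including decades of historical changes, lives in that community-maintained file.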
Lex Fridman
(01:54:36)
And there’s a lot of programmers listening to this right now with PTSD about time zones. On top of this one guy, there’s other libraries, the different programming languages that help manage the time zones for you. But still, within those, it’s amazing just the packages, the libraries, how few people build them out of their own love for building, for creating, for community and all of that. I almost like don’t want to interfere with the natural habitat of the geek. When you spot him in the wild, you just want to be like, “Well, careful, that thing needs to be treasured.”

Jimmy Wales
(01:55:16)
No, I met a guy many years ago, lovely, really sweet guy, and he was running a bot on English Wikipedia that I thought, “Wow, that’s actually super clever.” And what he had done is his bot was like spell checking, but rather than simple spell checking, what he had done is create a database of words that are commonly mistaken for other words. They’re spelled wrong, but no spell checker catches it because the misspelling is another word. And so, what he did is he wrote a bot that looks for these words and then checks the sentence around it for certain keywords. So in some context, this isn’t correct, but buoy and boy: people sometimes type B-O-Y when they mean B-U-O-Y, so if he sees the word boy, B-O-Y, in an article, he would look in the context and see, is this a nautical reference? And if it was, he didn’t autocorrect, he just would flag it up to himself to go, “Oh, check this one out.”

(01:56:23)
And that’s not a great example, but he had thousands of examples, and I was like, “That’s amazing. I would’ve never thought to do that.” And I’m glad that somebody did. And that’s also part of the openness of the system, and also I think being a charity, being this idea of actually, this is a gift to the world that makes someone go, “Oh, well, I’ll put my hand up. I see a little piece of things I can make better because I’m a good programmer and I can write this script to do this thing, and I’ll find it fun,” amazing.
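The bot Jimmy describes, a dictionary of commonly confused words plus a context check before anything gets flagged, can be sketched in a few lines. This is a hypothetical reconstruction, not the actual bot; the word list and context keywords are illustrative only:

```python
import re

# Hypothetical mini-database (the real bot reportedly had thousands of
# entries): word -> (the word it is often confused with, context keywords
# whose presence suggests the other word was actually intended).
CONFUSABLES = {
    "boy": ("buoy", {"nautical", "harbor", "mooring", "anchored", "sea"}),
    "principal": ("principle", {"moral", "ethical", "philosophy"}),
}

def flag_suspect_words(text):
    """Return (found, suggested) pairs for a human to review; never autocorrect."""
    words = re.findall(r"[a-z]+", text.lower())
    flags = []
    for i, word in enumerate(words):
        if word not in CONFUSABLES:
            continue
        suggestion, context = CONFUSABLES[word]
        # Look at a window of nearby words for the telltale context keywords.
        window = set(words[max(0, i - 5):i + 6])
        if window & context:
            flags.append((word, suggestion))
    return flags
```

So “The boy was anchored in the harbor overnight” gets flagged as a likely “buoy,” while “The boy ran home from school” is left alone, which matches the flag-don’t-autocorrect behavior described above.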

Wikipedia funding

Lex Fridman
(01:56:55)
Well, I got to ask about this big, bold decision at the very beginning to not do advertisements on the website. And just in general, the philosophy of the business model of Wikipedia, what went behind that?
Jimmy Wales
(01:57:06)
Yeah, so I think most people know this, but we’re a charity, so in the US, registered as a charity. And we don’t have any ads on the site. And the vast majority of the money is from donations, but the vast majority from small donors. So, people giving $25 or whatever.
Lex Fridman
(01:57:29)
If you’re listening to this, go donate.
Jimmy Wales
(01:57:31)
Go donate.
Lex Fridman
(01:57:31)
Donate now.
Jimmy Wales
(01:57:33)
$25.
Lex Fridman
(01:57:33)
I’ve donated so many times
Jimmy Wales
(01:57:34)
And we have millions of donors every year, but it’s a small percentage of people. I would say in the early days, a big part of it was aesthetic, almost as much as anything else. It was just like, “I don’t really want ads in Wikipedia. There’s a lot of reasons why it might not be good.” And even back then, I didn’t think as much as I have since about how a business model can tend to drive you to a certain place, and really thinking that through in advance is really important, because you might say, “Yeah, we’re really, really keen on community control and neutrality,” but if we had an advertising based business model, probably that would begin to erode. Even if I believe in it very strongly, organizations tend to follow the money in their DNA in the long run.

(01:58:25)
And so things like, it’s easy to think about some of the immediate problems. So if you go to read about, I don’t know, the Nissan car company, and you saw an ad for the new Nissan at the top of the page, you might be like, “Did they pay for this?”, or, “Do the advertisers have influence over the content?”, because you wonder about that for all kinds of media.
Lex Fridman
(01:58:53)
And that undermines trust.
Jimmy Wales
(01:58:55)
Undermines trust, right. But also, things like we don’t have clickbait headlines in Wikipedia. You’ve never seen Wikipedia entries with all these kind of listicles, “The 10 funniest cat pictures, number seven will make you cry,” none of that kind of stuff, because there’s no incentive, no reason to do that. Also, there’s no reason to have an algorithm to say, “Actually, we’re going to use our algorithm to drive you to stay on the website longer. We’re going to use the algorithm to drive you to…”, It’s like, “Oh, you’re reading about Queen Victoria. There’s nothing to sell you when you’re reading about Queen Victoria. Let’s move you on to Las Vegas because actually, the ad revenue around hotels in Las Vegas is quite good,” so there’s no incentive for the organization to go, “Oh, let’s move people around to things that have better ad revenue.”

(01:59:48)
Instead, it’s just like, “Oh, well, what’s most interesting to the community?,” just to make those links. So, that decision just seemed obvious to me, but as I say, it was less of a business decision and more of an aesthetic one. It’s like, “I like a Wikipedia that doesn’t have ads.” In those early days, a lot of the ads, that was well before the era of really high-quality ad targeting and all that, so you got a lot of-
Lex Fridman
(02:00:18)
Banners.
Jimmy Wales
(02:00:18)
Banners, punch the monkey ads and all that kind of nonsense. But there was no guarantee. It was not really clear, how could we fund this? It was pretty cheap. It still is quite cheap compared to most. We don’t have 100,000 employees and all of that, but would we be able to raise money through donations? And so, I remember the first time that we really did a donation campaign was on Christmas Day in 2003, I think it was. We had three servers, a database server and two front end servers, and they were all the same size or whatever, and two of them crashed. They broke, I don’t even remember now, the hard drives maybe. It was Christmas Day, so I scrambled on Christmas Day to go onto the database server, which fortunately survived, and have it become a front end server as well. And then, the site was really slow and it wasn’t working very well.

(02:01:28)
And I was like, “Okay, it’s time. We need to do a fundraiser,” and so I was hoping to raise $20,000 in a month’s time, but we raised nearly $30,000 within two or three weeks. So that was the first proof point of, “Oh, we put a banner up and people will donate.” We just explained we needed the money. And we were very small back then, and people were like, “Oh yeah, I love this. I want to contribute.” Then over the years, we’ve become more sophisticated about the fundraising campaigns, and we’ve tested a lot of different messaging and so forth. I remember one year we really went heavy with, “The idea of Wikipedia is a free encyclopedia for every single person on the planet. So what about the languages of Sub-Saharan Africa?”

(02:02:20)
So I thought, “Okay, we’re trying to raise money. We need to talk about that because it’s really important and near and dear to my heart.” Just instinctively, knowing nothing about charity fundraising, you see it all around: charities always mention the poor people they’re helping, so let’s talk about that. It didn’t really work as well. This is very vague and very broad, but the pitch that works better than any other in general is a fairness pitch of, “You use it all the time, you should probably chip in.” And most people are like, “Yeah, you know what? My life would suck without Wikipedia. I use it constantly and whatever. I should chip in, it just seems like the right thing to do.”

(02:03:02)
And there’s many variants on that, obviously. And it works. And people are like, “Oh yeah, Wikipedia, I love Wikipedia, and I should.” So sometimes people say, “Why are you always begging for money on the website?”, and it’s not that often, it’s not that much, but it does happen. They’re like, “Why don’t you just get Google and Facebook and Microsoft, why don’t they pay for it?”, and I’m like, “I don’t think that’s really the right answer.”
Lex Fridman
(02:03:34)
Influence starts to creep in.
Jimmy Wales
(02:03:35)
Influence starts to creep in, and questions start to creep in. The best funding for Wikipedia is the small donors. We also have major donors. We have high net worth people who donate, but we always are very careful about that sort of thing to say, “Wow, that’s really great and really important, but we can’t let that become influence because that would just be really quite not good for Wikipedia.”
Lex Fridman
(02:04:01)
I would love to know how many times I’ve visited Wikipedia, how much time I’ve spent on it, because I have a general sense that it’s the most useful site I’ve ever used, competing maybe with Google search, which ultimately lands on Wikipedia.
Jimmy Wales
(02:04:01)
Yeah, right.
Lex Fridman
(02:04:20)
But if I would just be reminded of like, “Hey, remember all those times your life was made better because of the site?”, I think I would be much more like, “Yeah, why did I waste money on site X, Y, Z when I should be giving a lot of it here?”
Jimmy Wales
(02:04:33)
Well, the Guardian newspaper has a similar model, which is they have ads. There’s no paywall, but they just encourage people to donate, and they do that. I’ve sometimes seen a banner saying, “Oh, this is your 134th article you’ve read this year, would you like to donate?” And I think it’s effective-
Lex Fridman
(02:04:55)
[inaudible 02:04:55].
Jimmy Wales
(02:04:54)
… they’re testing. But also, I wonder if some people just feel guilty and then think, “Oh, I shouldn’t bother them so much.” I don’t know. It’s a good question. I don’t know the answer.
Lex Fridman
(02:05:06)
I guess that’s the thing I could also turn on, ’cause that would make me… I feel like legitimately, there’s some sites, this speaks to our social media discussion: Wikipedia unquestionably makes me feel better about myself if I spend time on it. There’s some websites where I’m like, if I spend time on Twitter, sometimes I’m like, I regret. I think Elon talks about this, minimize the number of regretted minutes. My number of regretted minutes on Wikipedia is zero. I don’t remember a time… I’ve just discovered this. I started following on Instagram, a page, depthsofwikipedia.
Jimmy Wales
(02:05:46)
Oh, yeah.
Lex Fridman
(02:05:47)
There’s crazy Wikipedia pages. There’s no Wikipedia page that [inaudible 02:05:51]-
Jimmy Wales
(02:05:51)
Yeah, I gave her a media contributor of the year award this year because she’s so great.
Lex Fridman
(02:05:55)
Yeah, she’s amazing.
Jimmy Wales
(02:05:57)
Depthsofwikipedia is so fun.
Lex Fridman
(02:05:59)
Yeah, that’s the interesting point that I don’t even know if there’s a competitor. There may be the programming, Stack Overflow type of websites, but everything else, there’s always a trade-off. It’s probably because of the ad driven model because there’s an incentive to pull you into clickbait, and Wikipedia has no clickbait. It’s all about the quality of the knowledge and the wisdom.
Jimmy Wales
(02:06:22)
Yeah. No, that’s right. And I also love Stack Overflow. Although I wonder what you think of this: I only program for fun as a hobby, and I don’t have enough time to do it, but I do, and I’m not very good at it. So therefore, I end up on Stack Overflow quite a lot trying to figure out what’s gone wrong. And I have really transitioned to using ChatGPT much more for that because I can often find the answer clearly explained, and it works better than sifting through threads, and I feel bad about that because I do love Stack Overflow and their community. I’m assuming, I haven’t read anything in the news about it, but I’m assuming they are keenly aware of this, and they’re thinking about, “How can we use this chunk of knowledge that we’ve got here and provide a new type of interface where you can query it with a question and actually get an answer that’s based on the answers that we’ve had?” I don’t know.
Lex Fridman
(02:07:19)
Mm-hmm. And I think Stack Overflow currently has policies against using GPT. There’s a contentious kind of tension.
Jimmy Wales
(02:07:28)
Of course, yeah.
Lex Fridman
(02:07:29)
But they’re trying to figure that out.
Jimmy Wales
(02:07:30)
Well, and so we are similar in that regard. Obviously, all the things we’ve talked about, like ChatGPT makes stuff up and it makes up references, so our community has already put in place some policies about it. There’s always more nuance, but roughly speaking, it’s: you, the human, are responsible for what you put into Wikipedia. So, if you use ChatGPT, you better check it, ’cause there’s a lot of great use cases of like, “Oh, well, I’m not a native speaker of German, but I am pretty good,” I’m not talking about myself, a hypothetical me that’s pretty good, and I just want to run my edit through ChatGPT in German to make sure my grammar’s okay. That’s actually cool.

ChatGPT vs Wikipedia

Lex Fridman
(02:08:15)
Does it make you sad that people might use, increasingly use ChatGPT for something where they would previously use Wikipedia? So basically, use it to answer basic questions about the Eiffel Tower?
Jimmy Wales
(02:08:32)
Yeah. No-
Lex Fridman
(02:08:32)
And where the answer really comes at the source of it from Wikipedia, but they’re using this as an interface.
Jimmy Wales
(02:08:38)
Yeah. No, that’s completely fine. Part of it is our ethos has always been, “Here’s our gift to the world. Make something of it,” so if the knowledge is more accessible to people, even if they’re not coming through us, that’s fine. Now, obviously we do have certain business model concerns, and where we’ve had more conversation about this, this whole GPT thing is new, is things like if you ask Alexa, “What is the Eiffel Tower?”, and she reads you the first two sentences from Wikipedia and doesn’t say it’s from Wikipedia, and they’ve only recently started citing Wikipedia, then we worry, “Oh, if people don’t know they’re getting the knowledge from us, are they going to donate money? Or are they just going to think, oh, what’s Wikipedia for? I can just ask Alexa.” It’s like, well, Alexa only knows anything because she read Wikipedia. So we do think about that, but it doesn’t bother me in the sense of like, oh, I want people to always come to Wikipedia first.

(02:09:33)
But we had a great demo, literally just hacked together over a weekend by our head of machine learning, where he did this little thing to say, you could ask any question, and he was just knocking it together, so he used OpenAI’s API just to make a demo. You asked a question, “Why do ducks fly south for winter?”, which is the kind of thing you think, “Oh, I might just Google for that, or I might start looking in Wikipedia. I don’t know.” And so what he did, he asked ChatGPT, “What are some Wikipedia entries that might answer this?” Then, he grabbed those Wikipedia entries and said, “Here’s some Wikipedia entries. Answer this question based only on the information in this,” and he had pretty good results, and it prevented the model from making stuff up. Now, it’s just he hacked it together on a weekend, but what it made me think about was, “Oh, okay, so now we’ve got this huge body of knowledge that in many cases you’re like, oh, I really want to know about Queen Victoria. I’m just going to go read the Wikipedia entry and it’s going to take me through her life and so forth.”

(02:10:44)
But other times, you’ve got a specific question, and maybe we could have a better search experience where you can come to Wikipedia, ask your specific question, get your specific answer that’s from Wikipedia, including links to the articles you might want to read next. And that’s just a step forward. That’s just using a new type of technology to make the extraction of information from this body of text into my brain faster and easier. So, I think that’s cool.
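The weekend demo Wales describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the actual code: the `llm` callable stands in for a real hosted-model API call, and the function names and prompt wording are hypothetical.

```python
def build_grounded_prompt(question, articles):
    """Assemble a prompt that asks the model to answer ONLY from the
    supplied Wikipedia excerpts (prompt wording is an assumption)."""
    context = "\n\n".join(f"[{title}]\n{text}" for title, text in articles.items())
    return (
        "Here are some Wikipedia entries:\n\n"
        + context
        + "\n\nAnswer this question based only on the information above: "
        + question
    )


def grounded_answer(question, articles, llm):
    """`llm` is any callable mapping a prompt string to a completion;
    in the real demo it would wrap a call to a hosted model API."""
    return llm(build_grounded_prompt(question, articles))


# Usage with a stub "model" so the sketch runs offline:
articles = {
    "Bird migration": "Many ducks move south in winter to find open water and food.",
}
stub_llm = lambda prompt: "Per the excerpts: ducks head south for open water and food."
print(grounded_answer("Why do ducks fly south for winter?", articles, stub_llm))
```

Restricting the model to the retrieved excerpts is what reduced the made-up answers in the demo: the model summarizes sourced text instead of answering from memory.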
Lex Fridman
(02:11:10)
I would love to see ChatGPT grounding into websites like Wikipedia. And the other comparable website to me would be Wolfram Alpha, for more mathematical knowledge, that kind of stuff. So, taking you to a page that is really crafted, as opposed to… the moment it starts actually taking you to journalist websites, like news websites, it starts getting a little iffy, because you’re now in a land that has a wrong incentive.
Jimmy Wales
(02:11:44)
Right, yeah.
Lex Fridman
(02:11:45)
You’re pulled in.
Jimmy Wales
(02:11:45)
Yeah, and you need somebody to have filtered through that and tried to knock off the rough edges. Yeah, I think that’s exactly right. And I think that kind of grounding, I think they’re working really hard on it. I think that’s really important-
Jimmy Wales
(02:12:00)
… is, I think they’re working really hard on it. I think that’s really important. And that actually… So if you ask me to step back and be like very business-like about our business model and where’s it going to go for us, and are we going to lose half our donations because everybody’s just going to stop coming to Wikipedia and go to ChatGPT? Well, grounding will help a lot because frankly, most questions people have, if they provide proper links, we’re going to be at the top of that, just like we are in Google. So we’re still going to get tons of recognition and tons of traffic just from… Even if it’s just the moral properness of saying, “Here’s my source.” So I think we’re going to be all right in that.
Lex Fridman
(02:12:39)
Yeah, and in a close partnership, if the model is fine-tuned, is constantly retrained, then Wikipedia is one of the primary places where, if you want to change what the model knows, one of the things you should do is contribute to Wikipedia or clarify Wikipedia.
Jimmy Wales
(02:12:53)
Yeah, yeah. No, that’s [inaudible 02:12:55].
Lex Fridman
(02:12:54)
Or elaborate, expand, all that kind of stuff.

Larry Sanger

Jimmy Wales
(02:12:56)
Yeah.
Lex Fridman
(02:12:57)
You mentioned all of us have controversies. I have to ask, do you find the controversy of whether you are the sole founder or the co-founder of Wikipedia ironic, absurd, interesting, important? What are your comments?
Jimmy Wales
(02:13:13)
I would say unimportant. Not that interesting. I mean, one of the things that people are sometimes surprised to hear me say is I actually think Larry Sanger doesn’t get enough credit for his early work in Wikipedia, even though I think co-founder’s not the right title for that. So he had a lot of impact and a lot of great work, and I disagree about a lot of things since and all that, and that’s fine. So yeah. No, to me that’s like, it’s one of these things that the media love a falling out story, so they want to make a big deal out of it, and I’m just like, yeah, no.
Lex Fridman
(02:13:51)
So there’s a lot of interesting engineering contributions in the early days, like you were saying, there’s debates about how to structure it, what the heck is this thing that we’re doing? And there’s important people that contributed to that.
Jimmy Wales
(02:14:02)
Yeah, definitely.
Lex Fridman
(02:14:03)
So he also, you said you’ve had some disagreements. Larry Sanger said that nobody should trust Wikipedia, and that Wikipedia seems to assume that there’s only one legitimate, defensible version of the truth on any controversial question. That’s not how Wikipedia used to be. I presume you disagree with that analysis.
Jimmy Wales
(02:14:21)
Yeah. I mean, just straight up, I disagree. Go and read any Wikipedia entry on a controversial topic, and what you’ll see is a really diligent effort to explain all the relevant sides. So yeah, just disagree.
Lex Fridman
(02:14:32)
So on controversial questions, you think perspectives are generally represented?
Jimmy Wales
(02:14:36)
Yeah.
Lex Fridman
(02:14:37)
Because it has to do with the tension between the mainstream and the non-mainstream that we were talking about.
Jimmy Wales
(02:14:43)
Yeah. No, I mean for sure. To take this area of discussion seriously is to say, yeah, you know what? Actually that is a big part of what Wikipedians spend their time grappling with, is to say, how do we figure out whether a less popular view is pseudoscience? Is it just a less popular view that’s gaining acceptance in the mainstream? Is it fringe versus crackpot, et cetera, et cetera? And that debate is what you’ve got to do. There’s no choice about having that debate of grappling with something. And I think we do. And I think that’s really important. And I think if anybody said to the Wikipedia community, “Gee, you should stop covering minority viewpoints on this issue,”

(02:15:39)
I think they would say, “I don’t even understand why you would say that. We have to grapple with minority viewpoints in science and politics and so on.” And this is one of the reasons why there is no magic simple answer to all these things. It’s really contextual. It’s case by case. It’s like you’ve got to really say, okay, what is the context here? How do you do it? And you’ve always got to be open to correction and to change and to challenge and always be sort of serious about that.
Lex Fridman
(02:16:13)
I think what happens, again, with social media is when there is that grappling process in Wikipedia and a decision is made to remove a paragraph or to remove a thing or to say a thing, you’re going to notice the one direction of the oscillation of the grappling and not the correction. And you’re going to highlight that and say, how come this person… I don’t know, maybe legitimacy of elections that’s the thing that comes up. Donald Trump maybe previously-
Jimmy Wales
(02:16:42)
Yeah, I can give a really good example, which is, there was this sort of dust-up about the definition of recession in Wikipedia. The accusation was often quite ridiculous and extreme, which is, under pressure from the Biden administration, Wikipedia changed the definition of recession to make Biden look good, or we did it not under pressure, but because we’re a bunch of lunatic leftists and so on. And then when I see something like that in the press, I’m like, “Oh dear, what’s happened here? How do we do that?” Because I always just accept things for five seconds first, and then I go and I look and I’m like, “You know what? That’s literally completely not what happened.” What happened was, one editor thought the article needed restructuring. So the article has always said the traditional kind of loose definition of recession is two quarters of negative growth, but there’s always been within economics, within important agencies and different countries around the world, a lot of nuance around that.

(02:17:43)
And there’s other factors that go into it and so forth. And then it’s just an interesting complicated topic. And so the article has always had the definition of two quarters. And the only thing that really changed was moving that from the lead, from the top paragraph to further down. And then news stories appeared saying, “Wikipedia has changed the definition of recession.” And then we got a huge rush of trolls coming in. So the article was temporarily protected, I think, only semi-protected, and people were told, “Go to the talk page to discuss.” So anyway, it was a dust-up that was… When you look at it as a Wikipedian, you’re like, “Oh, this is a really routine kind of editorial debate.” Another example, which unfortunately our friend Elon fell for, I would say, is the Twitter files. So there was an article called the Twitter files, which is about these files that were released once Elon took control of Twitter, and he released internal documents.

Twitter files


(02:18:36)
And what happened was somebody nominated it for deletion, but even the nomination said, “This is mainly about the Hunter Biden laptop controversy, shouldn’t this information be there instead?” So anyone can… It takes exactly one human being anywhere on the planet to propose something for deletion, and that triggers a process where people discuss it, which within a few hours, it was what we call “snowball closed,” i.e., this doesn’t have a snowball’s chance in hell of passing. So an admin goes, “Yeah, wrong,” and closed the debate, and that was it. That was the whole thing that happened. And so nobody proposed suppressing the information. Nobody proposed it wasn’t important, it was just editorially boring internal questions. So sometimes people read stuff like that and they’re like, “Oh, you see, look at these leftists. They’re trying to suppress the truth again.” It’s like, well, slow down a second and come and look, literally, it’s not what happened.
Lex Fridman
(02:19:36)
So I think the right is more sensitive to censorship, and so they will more likely highlight it; there’s more virality to highlighting something that looks like censorship in any walk of life. And this moving a paragraph from one place to another, or removing it and so on, as part of the regular grappling of Wikipedia can make a hell of a good article or YouTube video.
Jimmy Wales
(02:20:01)
Oh, yeah. Yeah. No, it sounds really enticing and intriguing and surprising to most people because they’re like, “Oh, no, I’m reading Wikipedia. It doesn’t seem like a crackpot leftist website. It seems pretty kind of dull, really, in its own geeky way.” And so that makes a good story. It’s like, oh, am I being misled? Because there’s a shadowy cabal of Jimmy Wales.
Lex Fridman
(02:20:25)
I generally, I read political stuff. I mentioned to you that I’m traveling to have some very difficult conversations with high profile figures both in the war in Ukraine and in Israel and Palestine. And I read the Wikipedia articles around that, and I also read books on the conflict and the history of the different regions. And I find the Wikipedia articles to be very balanced, and there’s many perspectives being represented. But then I ask myself, “Well, am I one of them leftist crackpots that can’t see the truth?” I mean, it’s something I ask myself all the time, forget the leftist, just crackpot in general. Am I just being a sheep and accepting it? And I think that’s an important question to always ask, but not too much.
Jimmy Wales
(02:21:12)
Yeah. No, I agree.
Lex Fridman
(02:21:12)
A little bit, but not too much.
Jimmy Wales
(02:21:15)
Yeah. No, I think we always have to challenge ourselves of what do I potentially have wrong?

Government and censorship

Lex Fridman
(02:21:20)
Well, you mentioned pressure from government. You’ve criticized Twitter for giving in to Turkey’s government censorship. There’s also conspiracy theories or accusations of Wikipedia being open to pressure from government to government organizations, FBI and all this kind of stuff. What is the philosophy about pressure from government and censorship?
Jimmy Wales
(02:21:50)
So we’re super hardcore on this. We’ve never bowed down to government pressure anywhere in the world, and we never will. And we understand that we’re hardcore. And actually there is a bit of nuance about how different companies respond to this, but our response has always been just to say no. And if they threaten to block, well, knock yourself out, you’re going to lose Wikipedia. And that’s been very successful for us as a strategy because governments know they can’t just casually threaten to block Wikipedia, or block us for two days and expect we’re going to cave in immediately to get back into the market. And that’s what a lot of companies have done, and I don’t think that’s good. Now, we can go one level deeper and say, I’m actually quite sympathetic: if you have staff members in a certain country and they are at physical risk, you’ve got to put that into your equation.

(02:22:43)
So I understand that. If Elon said, “Actually, I’ve got a hundred staff members on the ground in such and such a country, and if we don’t comply, somebody’s going to get arrested. And it could be quite serious.” Okay, that’s a tough one. That’s actually really hard. But yeah, no. And then the FBI one, no, the criticism I saw. I kind of prepared for this because I saw people responding to your request for questions, and I was like, somebody’s like, “Oh, well, don’t you think it was really bad that you da da da, da?” I actually reached out to [inaudible 02:23:18] and said, “Can you just make sure I’ve got my facts right?” And the answer is, we received zero requests of any kind from the FBI or any of the other government agencies for any changes to content in Wikipedia. And had we received those requests at the level of the Wikimedia Foundation, we would’ve said, “We can’t do anything because Wikipedia is written by the community.”

(02:23:40)
And so the Wikimedia Foundation can’t change the content of Wikipedia without causing… I mean, God, that would be a massive controversy, you can’t even imagine. What we did do, and this is what I’ve done, I’ve been to China and met with the Minister of Propaganda. We’ve had discussions with governments all around the world, not because we want to do their bidding, but because we don’t want to do their bidding, but we also don’t want to be blocked. And we think actually having these conversations are really important. There’s no threat of being blocked in the US. That’s just never going to happen. There is the First Amendment. But in other countries around the world, it’s like, “Okay, what are you upset about? Let’s have the conversation. Let’s understand, and let’s have a dialogue about it so that you can understand where we come from and what we’re doing and why.”

(02:24:26)
And then sometimes it’s like, gee, if somebody complains that something’s bad in Wikipedia, whoever they are, don’t care who they are. It could be you, it could be the government, it could be the Pope. I don’t care who they are. It’s like, oh, okay. Well, our responsibility as Wikipedia is to go, “Oh, hold on, let’s check is that right or wrong? Is there something that we’ve got wrong in Wikipedia? Not because you’re threatening to block us, but because we want Wikipedia to be correct.” So we do have these dialogues with people. And a big part of what was going on with, you might call it pressure on social media companies or dialogue with, as we talked earlier, grapple with the language depending on what your view is. In our case, it was really just about, oh, okay, they want to have a dialogue about COVID information, misinformation.

(02:25:22)
We are this enormous source of information which the world depends on. We’re going to have that conversation. We’re happy to say, here’s… If they say, how do you know that Wikipedia is not going to be pushing some crazy anti-vax narrative first? I mean, I think it’s somewhat inappropriate for a government to be asking pointed questions in a way that implies possible penalties. I’m not sure that ever happened because we would just go, I don’t know, the Chinese blocked us. So it goes, right? We’re not going to cave in to any kind of government pressure, but whatever the appropriateness of what they were doing, I think there is a role for government in just saying, let’s understand the information ecosystem. Let’s think about the problem of misinformation, disinformation in society, particularly around election security, all these kinds of things. So I think it would be irresponsible of us to get a call from a government agency and say, “Yeah, why don’t you just fuck off? You’re the government.” But it would also be irresponsible to go, “Oh, dear, government agent’s not happy. Let’s fix Wikipedia so the FBI loves us.”
Lex Fridman
(02:26:35)
And when you say you want to have discussions with the Chinese government or with organizations like CDC and WHO, it’s to thoroughly understand what the mainstream narrative is so that it can be properly represented, but not drive what the articles are?
Jimmy Wales
(02:26:50)
Well, it’s actually important to say whatever the Wikimedia Foundation thinks has no impact on what’s in Wikipedia. So it’s more about saying to them, “We understand you’re the World Health Organization, or you’re whoever, and part of your job is to… Public health is about communications. You want to understand the world.” So it’s more about, “Well, let’s explain how Wikipedia works.”
Lex Fridman
(02:27:18)
So it’s more about explaining how Wikipedia works and like, “Hey, it’s the volunteers”?
Jimmy Wales
(02:27:22)
Yeah, exactly.
Lex Fridman
(02:27:23)
It’s a battle of ideas, and here’s how the sources are used.
Jimmy Wales
(02:27:29)
Yeah, exactly.
Lex Fridman
(02:27:30)
What are the legitimate sources, and what’s not a legitimate source.
Jimmy Wales
(02:27:32)
Yeah, exactly.
Lex Fridman
(02:27:33)
I mean, I suppose there’s some battle about what is a legitimate source. There could be statements made that CDC… There’s government organizations in general have sold themselves to be the place where you go for expertise. And some of that has been to small degree, raised in question over the response to the pandemic.
Jimmy Wales
(02:27:57)
Well, I think in many cases, and this goes back to my topic of trust. So there were definitely cases of public officials, public organizations where I felt like they lost the trust of the public because they didn’t trust the public. And so the idea is, we really need people to take this seriously and take actions, therefore, we’re going to put out some overblown claims because it’s going to scare people into behaving correctly. You know what? That might work for a little while, but it doesn’t work in the long run because suddenly people go from a default stance of… Like the Center for Disease Control, very well respected scientific organization. I don’t know. They’ve got a vault in Atlanta with the last vial of smallpox, or whatever it is that people think about them. And to go, “Oh, right, these are scientists we should actually take seriously and listen to, and they’re not politicized.”

(02:28:58)
It’s like, okay. And if you put out statements, and I don’t know if the CDC did, but the World Health Organization, whoever, that are provably false and also provably, you kind of knew they were false, but you did it to scare people because you wanted them to do the right thing. It’s like, no, you know what? That’s not going to work in the long run. You’re going to lose people, and now you’ve got a bigger problem, which is a lack of trust in science, a lack of trust in authorities who are, by and large, they’re like quite boring government bureaucrat scientists who just are trying to help the world.
Lex Fridman
(02:29:31)
Well, I’ve been criticized, and I’ve been torn on this. I’ve been criticized for criticizing Anthony Fauci too hard. The degree to which I criticized him is because he’s a leader. And I’m just observing the effect in the loss of trust in institutions like the NIH, where I personally know there’s a lot of incredible scientists doing incredible work, and I have to blame the leaders for the effects on the distrust and the scientific work that they’re doing because of what I perceive as basic human flaws of communication, of arrogance, of ego, of politics, all those kinds of things. Now, you could say, “You’re being too harsh,” possible, but I think that’s the whole point of free speech is you can criticize people who lead. Leaders, unfortunately or fortunately, are responsible for the effects on society.

(02:30:28)
To me, Anthony Fauci or whoever in the scientific position around the pandemic had an opportunity to have an FDR moment, to get everybody together, inspire about the power of science to rapidly develop a vaccine that saves us from this pandemic and future pandemics that can threaten the wellbeing of human civilization. This was epic and awesome and sexy. And to me, when I’m talking to people about science, it’s anything but sexy in terms of the virology and biology development because it’s been politicized. It’s icky, and people just don’t want to… “Don’t talk to me about the vaccine. I understand. I understand. I got vaccinated.” There’s just, “Let’s switch topics quick.”
Jimmy Wales
(02:31:11)
Yeah, yeah. Well, it’s interesting because as I say, I live in the UK and I think all these things are a little less politicized there. And I haven’t paid close enough attention to Fauci to have a really strong view. I’m sure I would disagree with some things. I remember hearing at the beginning of the pandemic as I’m unwrapping my Amazon package with these masks I bought because I heard there’s a pandemic. And I just was like, “I want some N95 masks, please.” And they were saying, “Don’t buy masks.” And the motivation was because they didn’t want there to be shortages in hospitals. Fine. But there were also statements of, masks, they’re not effective and they won’t help you. And then the complete about-face to, you’re ridiculous if you’re not wearing a… It’s just like, no, that about-face just lost people from day one.
Lex Fridman
(02:32:06)
The distrust in the intelligence of the public to deal with nuance, to deal with the uncertainty.
Jimmy Wales
(02:32:11)
Yeah. This is exactly what… I think this is where the Wikipedia neutral point of view is and should be, ideally. And obviously every article and everything we could… You know me now and you know how I am about these things, but ideally, it’s to say, look, we’re happy to show you all the perspectives. This is Planned Parenthood’s view, and this is the Catholic Church’s view, and we’re going to explain that, and we’re going to try to be thoughtful and put in the best arguments from all sides, because I trust you. You read that and you’re going to be more educated and you’re going to begin to make a decision. I mean, I can just talk about the UK, the government, da, da, da. When we found out in the UK that very high level government officials were not following the rules they had put on everyone else. I had just become a UK citizen just a little while before the pandemic, and it’s kind of emotional. You get a passport in a new country and you feel quite good.

(02:33:09)
I did my oath to the Queen, and then they dragged the poor old lady out to tell us all to be good. I was like, “We’re British and we’re going to do the right things, and it’s going to be tough, but going to…” So you have that kind of Dunkirk spirit moment, and you’re following the rules to a T, and then suddenly it’s like, well, they’re not following the rules. And so suddenly I shifted personally from, “I’m going to follow the rules, even if I don’t completely agree with them, but I’ll still follow because I think we’ve got to all chip in together,” to, “You know what? I’m going to make wise and thoughtful decisions for myself and my family.” And that generally is going to mean following the rules. But it’s basically, there were certain moments in time when you’re not allowed to be in an outside space unless you’re exercising. I’m like, I think I can sit in a park and read a book. It’s going to be fine. That’s an irrational rule, which I would’ve been following just personally of like, I’m just going to do the right thing.
Lex Fridman
(02:34:06)
And the loss of trust, I think, at scale was probably harmful to science. And to me, the scientific method and the scientific community is one of the biggest hopes, at least to me, for the survival and the thriving of human civilization.
Jimmy Wales
(02:34:22)
Absolutely. And I think you see some of the ramifications of this. There’s always been pretty anti-science, anti-vax people. That’s always been a thing, but I feel like it’s bigger now simply because of that lowering of trust. So a lot of people, maybe it’s like you say, a lot of people are like, “Yeah, I got vaccinated, but I really don’t want to talk about this because it’s so toxic.” And that’s unfortunate because I think people should say, “What an amazing thing.” There’s also a whole range of discourse around if this were a disease that was primarily killing babies, I think people’s emotions about it would’ve been very different, right or wrong. Then the fact that when you really looked at the death rate of getting COVID, wow, it’s really dramatically different. If you’re late in life, this was really dangerous. And if you’re 23 years old, yeah, well, it’s not great. And long COVID is a thing and all of that. And I think some of the public communications, again, were failing to properly contextualize it. Not all of it. It’s a complicated matter, but yeah.

Adolf Hitler’s Wikipedia page

Lex Fridman
(02:35:45)
Let me read you a Reddit comment that received two likes.
Jimmy Wales
(02:35:48)
Oh, two whole people liked it.
Lex Fridman
(02:35:52)
Yeah, two people liked it. And I don’t know, maybe you can comment on whether there’s truth to it, but I just found it interesting because I’ve been doing a lot of research on World War II recently. So this is about Hitler.
Jimmy Wales
(02:36:06)
Oh, okay.
Lex Fridman
(02:36:06)
It’s a long statement. “I was there when a big push was made to fight bias at Wikipedia. Our target became getting the Hitler article to be Wiki’s featured article. The idea was that the voting body only wanted articles that were good PR and especially articles about socially liberal topics. So the Hitler article had to be two to three times better and more academically researched to beat the competition. This bias seems to hold today, for example, the current list of political featured articles at a glance seems to have only two books, one on anarchism and one on Karl Marx. Surely we’re not going to say there have only ever been two articles about political non-biography books worth being featured, especially compared to 200 plus video games. And that the only topics with good books are socialism and anarchy.” Do you have any interesting comments on this kind of-
Jimmy Wales
(02:36:06)
Oh, yeah.
Lex Fridman
(02:37:00)
[inaudible 02:37:00] featured, how the featured is selected, maybe Hitler, because he is a special figure [inaudible 02:37:09] kind of stuff.
Jimmy Wales
(02:37:09)
I love that. No, I love the comparison to how many video games, and that definitely speaks to my earlier point: if you’ve got a lot of young geeky men who really like video games, that doesn’t necessarily get you to the right place in every respect. Certainly. Yeah. So here’s a funny story. I woke up one morning to a bunch of journalists in Germany trying to get in touch with me because German-language Wikipedia chose to have as the featured article of the day, Swastika. And people were going crazy about it, and some people were saying, “It’s illegal. Has German Wikipedia been taken over by Nazi sympathizers?” and so on. And it turned out it’s not illegal, discussing the swastika. Using the swastika in a political campaign and using it in certain ways is illegal in Germany in a way that it wouldn’t be in the US because of the First Amendment, but in this case, it was like actually part of the point is the swastika symbol is from other cultures as well.

(02:38:17)
I just thought it was interesting. I did joke to the community, I’m like, “Please don’t put the swastika on the front page without warning me because I’m going to get [inaudible 02:38:25].” It wouldn’t be me, it’s the foundation. I’m not that much on the front lines. So I would say that to put Hitler on the front page of Wikipedia, it is a special topic. And you would want to say, “Yeah, let’s be really careful that it’s really, really good before we do that,” because if we put it on the front page and it’s not good enough, that could be a problem. There’s no inherent reason. Clearly, World War II is a very popular topic in Wikipedia. It’s like, turn on the history channel. People, it’s a fascinating period of history that people are very interested in. And then on the other piece, like anarchism and Karl Marx.
Lex Fridman
(02:39:05)
Karl Marx. Yeah.
Jimmy Wales
(02:39:06)
Oh, yeah. I mean, that’s interesting. I’m surprised to hear that not more political books or topics have made it to the front page.
Lex Fridman
(02:39:15)
Now we’re taking this Reddit comment at-
Jimmy Wales
(02:39:16)
I mean, as if-
Lex Fridman
(02:39:17)
At face value.
Jimmy Wales
(02:39:18)
… it’s completely… But I’m trusting. So I think that probably is right. They probably did have the list up. No, I think that piece… The piece about how many of those featured articles have been video games, and if it’s disproportionate, I think the community should go, “Actually, what’s gone on? That doesn’t seem quite right.” I mean, you can imagine that because you’re looking for an article to be on the front page of Wikipedia, you want to have a bit of diversity in it. You want it to be not always something that’s really popular that week, so I don’t know, the last couple of weeks, maybe Succession, the big finale of Succession might lead you to think, oh, let’s put Succession on the front page, that’s going to be popular. In other cases, you kind of want to pick something super obscure and quirky because people also find that interesting and fun. Yeah, I don’t know. But you don’t want it to be video games most of the time. That sounds quite bad.
Lex Fridman
(02:40:17)
Well, let me ask you just as somebody who’s seen the whole thing, the development of the millions of articles. Big impossible question, what’s your favorite article?
Jimmy Wales
(02:40:33)
My favorite article? Well, I’ve got an amusing answer, which is possibly also true. There’s an article in Wikipedia called Inherently Funny Words, and one of the reasons I love it is when it was created early in the history of Wikipedia, it kind of became like a dumping ground. People would just come by and write in any word that they thought sounded funny. And then it was nominated for deletion because somebody’s like, “This is just a dumping ground. People are putting all kinds of nonsense in.” And in that deletion debate, somebody came forward and said essentially, “Wait a second, hold on. This is actually a legitimate concept in the theory of humor and comedy. And a lot of famous comedians and humorists have written about it.” And it’s actually a legitimate topic. So then they went through and they meticulously referenced every word that was in there and threw out a bunch that weren’t.

(02:41:29)
And so it becomes this really interesting article. And now my biggest disappointment, and it’s the right decision to make because there was no source, but there was a picture of a cow, but there was a rope around its head tying some horns onto the cow. So it was kind of a funny looking picture. It looked like a bull with horns, but it’s just a normal milk cow. And below it, the caption said, “According to some, cow is an inherently funny word,” which is just hilarious to me, partly because the “According to some” sounds a lot like Wikipedia, but there was no source. So it went away, and I feel very sad about that, but I’ve always liked that. And actually the reason Depths of Wikipedia amuses me so greatly is because it does highlight really interesting obscure stuff, and you’re like, “Wow, I can’t believe somebody wrote about that in Wikipedia. It’s quite amusing.” And sometimes there’s a bit of wry humor in Wikipedia. There’s always a struggle. You’re not trying to be funny, but occasionally a little inside humor can be quite healthy.
Lex Fridman
(02:42:40)
Apparently words with the letter K are funny. There’s a lot of really well researched stuff on this page. It’s actually exciting. And I should mention, Depths of Wikipedia is run by Annie Rauwerda.
Jimmy Wales
(02:42:56)
That’s right, Annie.
Lex Fridman
(02:42:57)
And let me just read off some of the pages. Octopolis and Octlantis-
Jimmy Wales
(02:43:05)
Oh yeah, that was…
Lex Fridman
(02:43:05)
… are two separate non-human underwater settlements built by the gloomy octopuses in Jervis Bay, east Australia. The first settlement, named Octopolis by a biologist, was founded in 2009. The individual structures in Octopolis consist of burrows around a piece of human detritus believed to be scrap metal, and it goes on in this way.
Jimmy Wales
(02:43:29)
That’s great.
Lex Fridman
(02:43:30)
Satiric misspelling, least-concern species. Humans were formally assessed as a species of least concern in 2008. I think Hitchhiker’s Guide to the Galaxy would slightly disagree. And the last one, let me just say: the friendship paradox is the phenomenon, first observed by the sociologist Scott Feld in 1991, that on average an individual’s friends have more friends than that individual.
Jimmy Wales
(02:43:58)
Oh, that’s really interesting.
Lex Fridman
(02:43:58)
That’s very lonely.
Jimmy Wales
(02:44:00)
That’s the kind of thing that makes you want to… It sounds implausible at first because shouldn’t everybody have on average, about the same number of friends as all their friends? So you really want to dig into the math of that and really think, oh, why would that be true?
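The math Wales gestures at can be checked with a quick simulation. This is an illustrative sketch, not anything shown in the conversation: the intuition is that popular people appear on many friend lists, so high-degree individuals are over-counted when you average "my friends' friend counts" over everyone.

```python
import random

# Friendship paradox demo on a random graph: the average number of friends
# that a person's friends have exceeds the average number of friends a
# person has, because popular people show up on many friend lists.
random.seed(0)
n = 1000
edges = set()
while len(edges) < 3000:
    a, b = random.sample(range(n), 2)
    edges.add((min(a, b), max(a, b)))

friends = {i: set() for i in range(n)}
for a, b in edges:
    friends[a].add(b)
    friends[b].add(a)

# Average degree over individuals: 2 * 3000 edges / 1000 people = 6.0.
avg_degree = sum(len(f) for f in friends.values()) / n

# For each person who has friends, the mean degree of their friends;
# then average those means over people.
means = [sum(len(friends[j]) for j in f) / len(f)
         for f in friends.values() if f]
avg_friends_of_friends = sum(means) / len(means)

print(avg_degree, avg_friends_of_friends)
```

With any degree variance at all, the second average comes out higher than the first, which is exactly the "your friends have more friends than you" effect.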
Lex Fridman
(02:44:13)
And it’s one way to feel more lonely in a mathematically rigorous way. Somebody else on Reddit asks, “I would love to hear some war stories from behind the scenes.” Is there something that we haven’t mentioned that was particularly difficult in this entire journey you’re on with Wikipedia?
Jimmy Wales
(02:44:32)
I mean, yeah, it’s hard to say. So part of what I always say about myself is that I’m a pathological optimist, so I always think everything is fine. And so things that other people might find a struggle, I’m just like, “Oh, well, this is the thing we’re doing today.” So that’s kind of about me, and it’s actually… I’m aware of this about myself, so I do like to have a few pessimistic people around me to keep me a bit on balance.
Jimmy Wales
(02:45:00)
I would say some of the hard things. I mean, there were hard moments when two out of three servers crashed on Christmas Day and then we needed to do a fundraiser and no idea what was going to happen. I would say as well, in that early period of time, the growth of the website and the traffic to the website was phenomenal and great. The growth of the community and in fact the healthy growth of the community was fine.

(02:45:29)
And then the Wikimedia Foundation, the nonprofit I set up to own and operate Wikipedia, as a small organization it had a lot of growing pains. That was the piece that’s just like many companies or many organizations in fast growth. It’s like you’ve hired the wrong people, or there’s this conflict that’s arisen and nobody’s got the experience to deal with it, and all that. So, no specific stories to tell, but I would say growing the organization was harder than growing the community and growing the website, which is interesting.
Lex Fridman
(02:46:02)
Well, yeah. It’s kind of miraculous and inspiring that a community can emerge and be stable, and that has so much kind of productive, positive output. Kind of makes you think. It’s one of those things you don’t want to analyze too much because you don’t want to mess with a beautiful thing, but it gives me faith in communities. I think that they can spring up in other domains as well.
Jimmy Wales
(02:46:29)
Yeah, I think that’s exactly right. At Fandom, my for-profit wiki company where it’s all these communities about pop culture mainly, sort of entertainment, gaming and so on, there’s a lot of small communities. So, I went last year to our Community Connect conference and just met some of these people, and here’s one of the leaders of the Star Wars wiki, which is called Wookieepedia, which I think is great. And he’s telling me about his community and all that. And I’m like, “Oh, right. Yeah, I love this.”

(02:47:03)
So, it’s not the same purpose as Wikipedia of a neutral, high-quality encyclopedia, but a lot of the same values are there, of like, “Oh, people should be nice to each other.” It’s like, when people get upset, just remember we’re working on a Star Wars wiki together, there’s no reason to get too outraged. And it’s just kind people, just geeky people with a hobby.

Future of Wikipedia

Lex Fridman
(02:47:27)
Where do you see Wikipedia in 10 years, 100 years, and 1,000 years?
Jimmy Wales
(02:47:35)
Right. So, in 10 years, I would say pretty much the same. We’re not going to become TikTok, with entertainment deals and scroll-by video humor and blah-blah-blah in an encyclopedia. I think in 10 years we probably will have a lot more AI-supported tools like I’ve talked about, and probably your search experience will be that you can ask a question and get the answer drawn from our body of work.
Lex Fridman
(02:48:09)
So, search and discovery, a little bit improved, interface, some of that.
Jimmy Wales
(02:48:12)
Yeah, all that. I always say one of the things that most people won’t notice, because already they don’t notice it, is the growth of Wikipedia in the languages of the developing world. So, you probably don’t speak Swahili, so you’re probably not checking out that Swahili Wikipedia is doing very well, and it is doing very well. And I think that kind of growth is actually super important. It’s super interesting, but most people won’t notice that.
Lex Fridman
(02:48:41)
If we can just linger on that: there’s so much incredible translation work being done with AI, with language models. Do you think that can accelerate Wikipedia?
Jimmy Wales
(02:48:55)
Yeah, I do.
Lex Fridman
(02:48:55)
So, you start with the basic draft of the translation of articles and then build on top of that.
Jimmy Wales
(02:49:00)
What I used to say is that machine translation for many years wasn’t much use to the community, because it just wasn’t good enough. As it’s gotten better, it’s tended to be a lot better in what we might call economically important languages, because of the corpus that they train on and all of that.

(02:49:20)
So, to translate from English to Spanish (if you’ve tried Google Translate recently, Spanish to English is what I would do), it’s pretty good. It’s actually not bad. It used to be half a joke, and then for a while it was kind of like, “Well, you can get the gist of something.” And now, actually, it’s pretty good. However, we’ve got a huge Spanish community who write in native Spanish, so they’re able to use it and they find it useful, but they’re writing.

(02:49:44)
But if you tried to do English to Zulu, where there’s not that much investment (there are loads of reasons to invest in English-Spanish, because they’re both huge, economically important languages; Zulu, not so much), for those smaller languages it was just still terrible. My understanding is it’s improved dramatically, also because the new methods of training don’t necessarily involve parallel corpora to try to match things up, but rather reading and understanding with tokens and large language models, and then reading and understanding, and then you get a much richer …

(02:50:22)
Anyway, apparently it’s quite improved, so I think that now it is quite possible that these smaller language communities are going to say, “Oh, well, finally, I can put something in in English and I can get out Zulu that I feel comfortable sharing with my community, because it’s actually good enough, or I can edit it a bit here and there.” So, I think that’s huge. So, I do think that’s going to happen a lot, and that’s going to accelerate, again, what will remain to most people an invisible trend, but that’s the growth in all these other languages. So, then move on to 100 years.
Lex Fridman
(02:50:52)
It’s starting to get scary.
Jimmy Wales
(02:50:54)
Well, the only thing I’d say about 100 years is, we’ve built the Wikimedia Foundation, and we run it in a quite cautious, financially conservative, and careful way. So, every year we build our reserves. Every year, we put aside a little bit more money. We also have the endowment fund, which just passed $100 million; that’s a completely separate fund with a separate board. So, it’s not just a big fat bank account for some future profligate CEO to blow through. The foundation will have to get the approval of that second board to be able to access that money, and that board can make other grants through the community and things like that.

(02:51:38)
So, the point of all that is I hope and believe that we are building in a financially stable way that we can weather various storms along the way, so that hopefully we’re not taking the kind of risks. And by the way, we’re not taking too few risks either. That’s always hard. I think the Wikimedia Foundation and Wikipedia will exist in 100 years if anybody exists in 100 years, we’ll be there.
Lex Fridman
(02:52:06)
Do you think the internet will just look predictably different, just the web?
Jimmy Wales
(02:52:11)
I do. I think right now, this sort of enormous step forward we’ve seen, and that has become public in the last year, of the large language models really is something else. It’s really interesting. You and I have both talked today about the flaws and the limitations, but still it’s … As someone who’s been around technology for a long time, it’s sort of that feeling of the first time I saw a web browser, the first time I saw the iPhone, the first time the internet was really usable on a phone. And it’s like, “Wow, that’s a step-change difference.” There’s a few other …
Lex Fridman
(02:52:48)
Maybe a Google Search.
Jimmy Wales
(02:52:49)
Google Search was actually one.
Lex Fridman
(02:52:51)
I remember the first Search.
Jimmy Wales
(02:52:51)
Because I remember AltaVista was kind of cool for a while, then it just got more and more useless, because the algorithm wasn’t good. And it’s like, “Oh, Google Search, now I like the internet, it works again.” And so, large language models, it feels like that to me. Like, “Oh, wow, this is something new and really pretty remarkable.” And it’s going to have some downsides. The negative use case …

(02:53:14)
People in the area who are experts are giving a lot of warnings. I’m not that worried, but then, I’m a pathological optimist. But I do see some really low-hanging-fruit bad things that can happen. My example is, how about some highly customized spam, where the email that you receive isn’t just misspelled words trying to get through filters, but actually a targeted email to you, one that knows something about you by reading your LinkedIn profile and writes a plausible email that will get through the filters. And it’s like suddenly, “Oh, that’s a new problem. That’s going to be interesting.”
Lex Fridman
(02:53:55)
Just on the Wikipedia editing side, does it make the job of the volunteer of the editor more difficult in a world where larger and larger percentage of the internet is written by an LLM?
Jimmy Wales
(02:54:08)
One of my predictions, and we’ll see, ask me again in five years how this panned out, is that in a way, this will strengthen the value and importance of some traditional brands. So, if I see a news story and it’s from the Wall Street Journal, from the New York Times, from Fox News, I know what I’m getting, and I trust it to whatever extent I might have trust or distrust in any of those.

(02:54:43)
And if I see a brand new website that looks plausible, but I’ve never heard of it, and it could be machine generated content that may be full of errors, I think I’ll be more cautious. I think I’m more interested. And we can also talk about this around photographic evidence. So, obviously, there will be scandals where major media organizations get fooled by a fake photo.

(02:55:04)
However, if I see a photo, like the recent one of the Pope wearing an expensive puffer jacket, I’m going to go, “Yeah, that’s amazing that a fake like that could be generated.” But my immediate thought is not, “Oh, so the Pope is dipping into the money, eh?”, partly because this particular Pope doesn’t seem like he’d be the type.
Lex Fridman
(02:55:25)
My favorite is extensive pictures of Joe Biden and Donald Trump hanging out and having fun together.
Jimmy Wales
(02:55:31)
Yeah. Brilliant. So, I think people will care about the provenance of a photo. And if you show me a photo and you say, “Yeah, this photo is from Fox News,” even though I don’t necessarily think that’s the highest, but I’m like, “Wow, it’s a news organization and they’re going to have journalism, and they’re going to make sure the photo is what it purports to be.”

(02:55:55)
That’s very different from a photo randomly circulating on Twitter. Whereas I would say, 15 years ago, a photo randomly circulating on Twitter, in most cases, the worst you could do, and this did happen, is misrepresent the battlefield. So, like, “Oh, here’s a bunch of injured children. Look what Israel has done.” But actually, it wasn’t Israel, it was another case 10 years ago. That has happened, that has always been around. But now, we can have much more specifically constructed, plausible looking photos that if I just see them circulating on Twitter, I’m going to go, “I just don’t know. Not sure. I can make that in five minutes.”
Lex Fridman
(02:56:32)
Well, I also hope it’s kind of like what you’re writing about in your book: that we could also have citizen journalists who have a stable, verifiable trust that builds up. So, it doesn’t have to be the New York Times, this big organization; you could be an organization of one, as long as it’s stable and carries through time, and the trust builds up.
Jimmy Wales
(02:56:52)
No, I agree. But the one thing I’ve said in the past, and this depends on who that person is and what they’re doing, but it’s like I think my credibility, my general credibility in the world should be the equal of a New York Times reporter. So, if something happens, and I witness it, and I write about it, people are going to go, “Well, Jimmy Wales said it. That’s just like if a New York Times reporter said it. I’m going to tend to think he didn’t just make it up.”

(02:57:18)
The truth is nothing interesting ever happens around me. I don’t go to war zones. I don’t go to big press conferences. I don’t interview Putin and Zelenskyy. To an extent, yes. Whereas I do think for other people, those traditional models of credibility are really, really important. And then there is this sort of citizen journalism. I don’t know if you think of what you do as journalism. I kind of think it is, but you do interviews, you do long form interviews.

(02:57:49)
If you come and you say, “Here’s my tape,” but you wouldn’t hand out a tape. I just gestured to you as if I’m handing you a cassette tape. But if you put it into your podcast, “Here’s my interview with Zelenskyy,” people aren’t going to go, “Yeah, how do we know? That could be a deep fake. You could have faked that.” Because people are like, “Well, no, you’re a well-known podcaster and you do interview interesting people. Yeah, you wouldn’t think that.” So, your brand becomes really important.

(02:58:19)
Whereas if suddenly, and I’ve seen this already, I’ve seen sort of video with subtitles in English, and apparently the Ukrainian was the same and it was Zelenskyy saying something really outrageous. And I’m like, “Yeah, I don’t believe that. I don’t think he said that in a meeting with whatever. I think that’s Russian propaganda or probably just trolls.”
Lex Fridman
(02:58:42)
Yeah. And then building platforms and mechanisms of how that trust can be verified. If something appears on a Wikipedia page, that means something. If something appears on my Twitter account, that means something. That means I, this particular human, have signed off on it.
Jimmy Wales
(02:58:58)
Yeah, exactly.
Lex Fridman
(02:58:58)
And then the trust you have in this particular human transfers to the piece of content. Hopefully, there’s millions of people with different metrics of trust. And then you could see that there’s a certain kind of bias in the set of conversations you’re having. So, maybe okay, I trust this person, I have this kind of bias and I’ll go to this other person with this other kind of bias and I can integrate them in this kind of way. Just like you said with Fox News and whatever [inaudible 02:59:24].
Jimmy Wales
(02:59:23)
Yeah. Wall Street Journal, New York Times, they’ve all got where they sit. Yeah.

Advice for young people

Lex Fridman
(02:59:29)
So, you have built, I would say, one of, if not the, most impactful websites in the history of human civilization. So, let me ask you to give advice to young people on how to have impact in this world: high schoolers, college students wanting to have a big positive impact on the world.
Jimmy Wales
(02:59:50)
Yeah, great. If you want to be successful, do something you’re really passionate about rather than some kind of cold calculation of what can make you the most money. Because if you go and try to do something and you’re like, “I’m not that interested, but I’m going to make a lot of money doing it,” you’re probably not going to be that good at it. And so, that is a big piece of it.

(03:00:12)
For startups, I give this advice, and this goes for a career, a startup, any kind of young person just starting out: be persistent. There will be moments when it’s not working out, and you can’t just give up too easily. You’ve got to persist through some hard times. Maybe two servers crash on a Sunday and you’ve got to scramble to figure it out, but persist through that, and then also be prepared to pivot. That’s a newer word, new for me. But when I pivoted from Nupedia to Wikipedia, it’s like, “This isn’t working. I’ve got to completely change.” So, be willing to completely change direction when something is not working.

(03:00:54)
Now, the problem with these two wonderful pieces of advice is, which situation am I in today? Is this a moment when I need to just power through and persist because I’m going to find a way to make this work? Or is this a moment where I need to go, “Actually, this is totally not working and I need to change direction?” But also, I think for me, that always gives me a framework of like, “Okay, here’s the problem. Do we need to change direction, or do we need to power through it?” And just knowing those are the choices. Not always the only choices, but those choices.

(03:01:27)
I think it can be helpful to say, “Okay, am I chickening out because I’m having a little bump, and I’m feeling emotional, and I’m just going to give up too soon?” Ask yourself that question. And also, it’s like, “Am I being pigheaded and trying to do something that actually doesn’t make sense?” Okay, ask yourself that question too. Even though they’re contradictory questions, sometimes it will be one, sometimes it will be the other, and you’ve got to really think it through.
Lex Fridman
(03:01:53)
I think persisting with the business model behind Wikipedia is such an inspiring story, because we live in a capitalist world. We live in a scary world, I think, for an internet business. And so, to do things differently than a lot of websites are doing… Wikipedia has lived through this excessive explosion of many websites that are basically ad-driven. Google is ad-driven. Facebook, Twitter, all of these websites are ad-driven. And you see them succeed, become these incredibly rich, powerful companies, and if you could just have that money, you would think, as somebody running Wikipedia, “I could do so much positive stuff.” And so, to persist through that is … I think, from my perspective now, Monday-morning quarterbacking or whatever, it was the right decision, but boy is that a tough decision.
Jimmy Wales
(03:02:56)
It seemed easy at the time.
Lex Fridman
(03:02:58)
And then you just kind of stay with it. Stick with it.
Jimmy Wales
(03:03:00)
Yeah, just stay with it. It’s working.
Lex Fridman
(03:03:01)
So, in that case, you chose persistence.
Jimmy Wales
(03:03:06)
Yeah. I always like to give the example of MySpace, because I just think it’s an amusing story. MySpace was poised, I would say, to be Facebook. It was huge, it was viral, it was lots of things. It kind of foreshadowed a bit of maybe even TikTok, because it was a lot of entertainment content, casual. And then Rupert Murdoch bought it and it collapsed within a few years. And part of that, I think, was because they were really, really heavy on ads and less heavy on the customer experience.

(03:03:40)
So, I remember, to accept a friend request was like three clicks, where you saw three ads. And on Facebook, you’d accept the friend request and you didn’t even leave the page; it’s just accepted. So, I used to give this example of, “Yeah, well, Rupert Murdoch really screwed that one up.” And in a sense, maybe he did, but somebody said, “You know what, actually, he bought it for …” And I don’t remember the numbers he bought it for, 800 million, and it was very profitable through its decline. He actually made his money back and more. From a financial point of view, it was a bad investment only in the sense that you could have been Facebook. But on more mundane metrics, it’s like, “Actually, it worked out for him.”
Lex Fridman
(03:04:18)
It all matters how you define success.
Jimmy Wales
(03:04:20)
It does. That is also advice to young people. One of the things I would say is, when we have our mental models of success as an entrepreneur, for example, the examples in your mind are Bill Gates, Mark Zuckerberg: people who at a very young age had one really great idea that just went straight to the moon, and they became one of the richest people in the world. That is really unusual, like really, really rare.

(03:04:52)
And for most entrepreneurs, that is not a life path you’re going to take. You’re going to fail, you’re going to reboot, you’re going to learn from what you failed at, you’re going to try something different. And that is really important, because if your standard of success is, “Well, I feel sad because I’m not as rich as Elon Musk,” it’s like, “Well, so should almost everyone; possibly everyone except Elon Musk is not as rich as Elon Musk.”

(03:05:17)
Realistically, you can set a standard of success even in a really narrow sense, which I don’t recommend, of thinking only about your financial success. It’s like, if you measure your financial success by thinking about billionaires, that’s heavy. That’s probably not good. I don’t recommend it.

(03:05:40)
Personally, for me, when journalists say, “Oh, how does it feel to not be a billionaire?” I usually say, “I don’t know, how does it feel to you?” Because they’re not, either. But also, I live in London. The number of bankers that no one has ever heard of who live in London and who make far more money than I ever will is quite large, and I wouldn’t trade my life for theirs at all, because mine is so interesting.

(03:06:07)
“Oh, right, Jimmy, we need you to go and meet the Chinese propaganda minister.” “Oh, okay. That’s super interesting.” Like, “Yeah, Jimmy, here’s the situation. You can go to this country, and while you’re there, the President has asked to see you.” It’s like, “God, that’s super interesting.” “Jimmy, you’re going to this place, and there’s a local Wikipedian who said, ‘Do you want to stay with me and my family?'” And I’m like, “Yeah, that’s really cool. I would like to do that. That’s really interesting.” I don’t do that all the time, but I’ve done it and it’s great. So, for me, that’s arranging your life so that you have interesting experiences. It’s just great.

Meaning of life

Lex Fridman
(03:06:50)
This is more to the question of what Wikipedia looks like in 1,000 years. What do you think is the meaning of this whole thing? Why are we here, human civilization? What’s the meaning of life?
Jimmy Wales
(03:07:00)
Yeah. I don’t think there is an external answer to that question.
Lex Fridman
(03:07:05)
And I should mention that there’s a very good Wikipedia page on the different philosophies in the meaning of life.
Jimmy Wales
(03:07:11)
Oh, interesting. I have to read that and see what I think. Hopefully, it’s actually neutral and gives a wide range …
Lex Fridman
(03:07:16)
Oh, it’s a really good reference to a lot of different philosophies about meaning. Twentieth-century philosophy in general, from Nietzsche to the existentialists, to Simone de Beauvoir, all of them have an idea of meaning. They really struggle with it systematically, rigorously, and that’s what the page covers. And obviously, a shout-out to the Hitchhiker’s Guide and all that kind of stuff.
Jimmy Wales
(03:07:37)
Yeah. I think there’s no external answer to that. I think it’s internal. I think we decide what meaning we will have in our lives and what we’re going to do with ourselves. If we’re talking about 1,000 years, millions of years: Yuri Milner wrote a book. He’s a big internet investor guy. He wrote a book advocating quite strongly for humans exploring the universe and getting off the planet, and he funds projects like using lasers to send little cameras, and interesting stuff. And he talks a lot in the book about meaning. His view is that the purpose of the human species is to broadly survive and get off the planet.

(03:08:31)
Well, I don’t agree with everything he has to say, because I think that’s not a meaning that can motivate most people in their own lives. It’s like, “Okay, great.” The distances of space are absolutely enormous, so I don’t know. Shall we build generation ships to start flying places? I can’t do that. Even if I’m Elon Musk and I could devote all my wealth to building one, I’ll be dead on the ship on the way. So, is that really a meaning?

(03:08:57)
But I think it’s really interesting to think about. And reading his little book, it’s quite a short little book, it did make me think, “Wow, this is big. This is not what you think about in your day-to-day life. Where is the human species going to be in 10 million years?” And it does make you sort of turn back to Earth and say, “Gee, let’s not destroy the planet. We’re stuck here for at least a while, and therefore we should really think about sustainability.” I mean, one-million-year sustainability.

(03:09:37)
And we don’t have all the answers. We have nothing close to the answers. I’m actually excited about AI in this regard, while also bracketing, yeah, I understand there are also risks and people are terrified of AI. But I actually think it’s quite interesting, this moment in time, that we may have in the next 50 years the chance to really, really solve some long-term human problems, for example in health. The progress that’s being made in cancer treatment, because we are able to model molecules and genetics at scale, and things like this, it’s huge. It’s really exciting. So, if we can hang on for a little while, certain problems that seem completely intractable today, like climate change, may end up being actually not that hard.
Lex Fridman
(03:10:30)
And we just might be able to alleviate the full diversity of human suffering.
Jimmy Wales
(03:10:35)
For sure. Yeah.
Lex Fridman
(03:10:37)
In so doing, help increase the chance that we can propagate the flame of human consciousness out towards the stars. And I think another important one, if we fail to do that. For me, it’s propagating, maintaining the full diversity, and richness, and complexity, and expansiveness of human knowledge. So, if we destroy ourselves, it would make me feel a little bit okay if the human knowledge persists.
Jimmy Wales
(03:11:09)
That just triggered me to say something really interesting, which is, when we talked earlier about using machines to translate, we mostly talked about small languages and translating into English, but I always like to tell this story of something inconsequential, really.

(03:11:28)
I was in Bergen, Norway, where every year they’ve got this annual festival called [foreign language 03:11:33], which is young groups drumming, and they have a drumming competition. It’s the 17 sectors of the city, and they’ve been doing it for a couple hundred years or whatever. They wrote about it in the three languages of Norway, and then from there it was translated into English, into German, et cetera, et cetera.

(03:11:53)
And so, what I love about that story is what it reminds me is this machine translation goes both ways. And when you talk about the richness and broadness of human culture, we’re already seeing some really great pieces of this. So, like Korean soap operas, really popular, not with me, but with people.

(03:12:17)
Imagine taking a very famous, very popular, very well-known Korean drama. I mean literally now, we’re just about there technologically, where we can use a machine to redub it in English in an automated way, including digitally editing the faces so it doesn’t look dubbed. And so, suddenly you say, “Oh, wow, here’s a piece of …” It’s the Korean equivalent of, maybe it’s Friends as a comedy, or maybe it’s Succession, just to be very contemporary. It’s something that really impacted a lot of people, and they really loved it, and we have literally no idea what it’s about. And suddenly, it’s like, “Wow.” Music, street music from wherever in the world, can suddenly become accessible to us all in new ways. It’s so cool.
Lex Fridman
(03:13:09)
It’s really exciting to get access to the richness of culture in China, in the many different subcultures of Africa, South America.
Jimmy Wales
(03:13:19)
One of my unsuccessful arguments with the Chinese government is by blocking Wikipedia, you aren’t just stopping people in China from reading Chinese Wikipedia and other language versions of Wikipedia, you’re also preventing the Chinese people from telling their story. So, is there a small festival in a small town in China like [foreign language 03:13:41]? I don’t know. But by the way, the people who live in that village, that small town of 50,000, they can’t put that in Wikipedia and get it translated into other places. They can’t share their culture and their knowledge.

(03:13:54)
And I think for China, this should be a somewhat influential argument, because China does feel misunderstood in the world. And it’s like, “Okay, well, there’s one way. If you want to help people understand, put it in Wikipedia. That’s what people go to when they want to understand.”
Lex Fridman
(03:14:08)
And give the amazing, incredible people of China a voice.
Jimmy Wales
(03:14:13)
Exactly.
Lex Fridman
(03:14:14)
Jimmy, I thank you so much. I’m such a huge fan of everything you’ve done.
Jimmy Wales
(03:14:18)
Oh, thank you. That’s really great.
Lex Fridman
(03:14:18)
I keep saying Wikipedia. I’m deeply, deeply, deeply, deeply grateful for Wikipedia. I love it. It brings me joy. I donate all the time. You should donate too. It’s a huge honor to finally talk with you, and this is just amazing. Thank you so much for today.
Jimmy Wales
(03:14:31)
Thanks for having me.
Lex Fridman
(03:14:33)
Thanks for listening to this conversation with Jimmy Wales. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from the historian Daniel Boorstin: “The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” Thank you for listening, and hope to see you next time.

Test

Jimmy Wales
00:00:00
We’ve never bowed down to government pressure anywhere in the world, and we never will. We understand that we’re hardcore, and actually, there is a bit of nuance about how different companies respond to this, but our response has always been just to say no. If they threaten to block, well, knock yourself out. You’re going to lose Wikipedia.
Lex Fridman
00:00:21
The following is a conversation with Jimmy Wales, co-founder of Wikipedia, one of, if not the most impactful websites ever, expanding the collective knowledge, intelligence, and wisdom of human civilization. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Jimmy Wales.
Lex Fridman
00:00:47
Let’s start at the beginning. What is the origin story of Wikipedia?
Jimmy Wales
00:00:51
The origin story of Wikipedia, well, so I was watching the growth of the free software movement, open-source software, and seeing programmers coming together to collaborate in new ways, sharing code, doing that under free license, which is really interesting because it empowers an ability to work together. That’s really hard to do if the code is still proprietary, because then if I chip in and help, we have to figure out how I’m going to be rewarded and what that is. But the idea that everyone can copy it and it just is part of the commons really empowered a huge wave of creative software production. I realized that that kind of collaboration could extend beyond just software to all kinds of cultural works.

00:01:38
The first thing that I thought of was an encyclopedia, and I thought, “Oh, it seems obvious that an encyclopedia is something you can collaborate on.” There’s a few reasons why. One, we all pretty much know what an encyclopedia entry on, say, the Eiffel Tower should be like: you should see a picture, a few pictures maybe, history, location, something about the architect, et cetera, et cetera. So we have a shared understanding of what it is we’re trying to do, and then we can collaborate, and different people can chip in and find sources and so on and so forth. So we first set up Nupedia, which was about two years before Wikipedia.

00:02:18
With Nupedia, we had this idea that in order to be respected, we had to be even more academic than a traditional encyclopedia, because a bunch of volunteers on the internet putting out an encyclopedia could be made fun of if it’s just every random person. So we implemented this seven-stage review process to get anything published, and two things came of that. One of the earliest entries that we published after this rigorous process, we had to pull a few days later, because as soon as it hit the web and the broader community took a look at it, people noticed plagiarism and realized that it wasn’t actually that good, even though it had been reviewed by academics and so on. So we had to pull it. It’s like, “Oh, okay. Well, so much for a seven-stage review process.”

00:03:07
I was frustrated: “Why is this taking so long? Why is it so hard?” I saw that Robert Merton had won a Nobel Prize in economics for his work on option pricing theory. When I was in academia, option pricing theory was what I worked on; I had a published paper. So I’d worked through all of his academic papers, and I knew his work quite well. I thought, “Oh, I’ll write a short biography of Merton.” When I started to do it (I’d been out of academia, I hadn’t been a grad student for a few years by then), I felt this huge intimidation, because they were going to take my draft and send it to the most prestigious finance professors that we could find to give me feedback for revisions. It felt like being back in grad school. It’s really oppressive: you’re going to submit it for review, and you’re going to get critiques.
Lex Fridman
00:03:59
A little bit of the bad part of grad school.
Jimmy Wales
00:04:01
Yeah, yeah, the bad part of grad school. So I was like, “Oh, this isn’t intellectually fun, this is like the bad part of grad school. It’s intimidating, and there’s a lot of potential embarrassment if I screw something up and so forth.” So that was when I realized, “Okay, look, this is never going to work. This is not something that people are really going to want to do.” Jeremy Rosenfeld, one of my employees, had brought in and shown me the wiki concept in December, and then Larry Sanger brought in the same idea and said, “What about this wiki idea?” So in January, we decided to launch Wikipedia, but we weren’t sure. The original project was called Nupedia, and even though it wasn’t successful, we did have quite a group of academics and really serious people.

00:04:45
We were concerned that, “Oh, maybe these academics are going to really hate this idea, and we shouldn’t just convert the project immediately. We should launch this as a side project: here’s a wiki where we can start playing around.” But actually, we got more work done in two weeks than we had in almost two years, because people were able to just jump on and start doing stuff, and it was actually a very exciting time. Back then, you could be the first person who typed “Africa is a continent” and hit Save, which isn’t much of an encyclopedia entry, but it’s true, and it’s a start, and it’s kind of fun to put your name down.

00:05:20
Actually, a funny story: several years later, I just happened to be online when, I think his name is Robert Aumann, won the Nobel Prize in economics. We didn’t have an entry on him at all, which was surprising, but not that surprising; this was still early days. So I got to be the first person to type “Robert Aumann won the Nobel Prize in economics” and hit Save, which again, wasn’t a very good article. But then I came back two days later and people had improved it and so forth. So that second experience was the opposite of the one with Robert Merton, where I never succeeded because it was just too intimidating. This time I was able to chip in and help, other people jumped in, and everybody was interested in the topic, because it was all in the news at the moment. So it’s just a completely different model, which worked much, much better.
Lex Fridman
00:06:03
Well, what is it that made that so accessible, so fun, so natural to just add something?
Jimmy Wales
00:06:09
Well, I think, especially in the early days, and this, by the way, has gotten much harder because there are fewer topics that are just greenfield available. But you could say, “Oh, well, I know a little bit about this, and I can get it started.” But then it is fun to come back then and see other people have added and improved and so on and so forth. That idea of collaborating where people can, much like open-source software, you put your code out and then people suggest revisions. They change it, and it modifies and it grows beyond the original creator, it’s just a fun, wonderful, quite geeky hobby, but people enjoy it.
Lex Fridman
00:06:51
How much debate was there over the interface, over the details of how to make it seamless and frictionless?
Jimmy Wales
00:06:57
Yeah, not as much as there probably should have been, in a way. During those two years of the failure of Nupedia, when very little work got done, what was actually productive was a huge, long email discussion: very clever people talking about things like neutrality, talking about what an encyclopedia is, but also talking about more technical ideas. Back then, XML was all the rage, and people were thinking: shouldn’t certain data that might appear in multiple articles get updated automatically? So for example, the population of New York City: every 10 years there’s a new official census; couldn’t you just update that bit of data in one place and have it update across all languages? That is a reality today. But back then it was just like, “Hmm, how do we do that? How do we think about that?”
Lex Fridman
00:07:47
So that is a reality today where it’s-
Jimmy Wales
00:07:48
Yeah-
Lex Fridman
00:07:49
… there’s some-
Jimmy Wales
00:07:50
Yeah, so Wikidata-
Lex Fridman
00:07:50
… universal variables? Wikidata.
Jimmy Wales
00:07:56
Yeah, Wikidata. From a Wikipedia entry, you can link to that piece of data in Wikidata, and it’s a pretty advanced thing, but there are advanced users who are doing that. Then when that gets updated, it updates in all the languages where you’ve done that.
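The mechanism described here, a fact stored once and rendered into every language edition that references it, can be sketched in a few lines of Python. This is a toy illustration: Q60 and P1082 mirror real Wikidata identifiers (New York City and its population property), but the data store and `render` helper are invented for the example, not the actual Wikidata API.

```python
# Toy sketch of the Wikidata idea: store a fact once, render it in every
# language edition that references it. Item/property IDs mirror real
# Wikidata conventions (Q60 = New York City, P1082 = population), but
# this store is a stand-in, not the real API.

wikidata = {
    "Q60": {"P1082": 8_804_190}  # New York City, 2020 census figure
}

def render(template: str, item: str, prop: str) -> str:
    """Fill a population placeholder from the central data store."""
    return template.replace("{{population}}", f"{wikidata[item][prop]:,}")

en = "New York City has a population of {{population}}."
de = "New York City hat {{population}} Einwohner."

print(render(en, "Q60", "P1082"))
# Update the fact once...
wikidata["Q60"]["P1082"] = 8_258_035  # hypothetical newer figure
# ...and every language edition picks it up on the next render.
print(render(de, "Q60", "P1082"))
```

Both templates pull from the same entry, so a single update propagates everywhere, which is the point Wales is making.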
Lex Fridman
00:08:07
That’s really interesting. So there was this chain of emails in the early days discussing the details of what Wikipedia would be. There’s the interface, there’s the-
Jimmy Wales
00:08:14
Yeah, so the interface. An example: there was some software called UseModWiki, which we started with. It’s quite amusing actually, because the main reason we launched with UseModWiki is that it was a single Perl script, so it was really easy for me to install it on the server and just get running. But it was some guy’s hobby project. It was cool, but it was just a hobby project. All the data was stored in flat text files, so there was no real database behind it. So to search the site, you basically used grep, the basic Unix utility, to look through all the files. That clearly was never going to scale. But also in the early days, it didn’t have real logins. You could set your username, but there were no passwords. So I might say I’m Bob Smith, and then someone else comes along and says, “No, I’m Bob Smith,” and they’d both have it. Now, that never really happened.
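The flat-file search Wales describes (no database, just scan every page file for the query, which is essentially what grep does) can be sketched like this. The file layout and naming here are invented for illustration; UseModWiki’s actual storage format differed.

```python
import pathlib
import tempfile

# Sketch of early wiki search with no database: scan every flat page
# file for the query string, the way grep would. The *.txt layout is
# invented for this example.

def search(pages_dir: pathlib.Path, query: str) -> list[str]:
    """Return titles of pages whose text contains the query (case-insensitive)."""
    hits = []
    for page in pages_dir.glob("*.txt"):
        if query.lower() in page.read_text().lower():
            hits.append(page.stem)
    return hits

# Build a tiny two-page wiki in a temporary directory.
pages = pathlib.Path(tempfile.mkdtemp())
(pages / "Africa.txt").write_text("Africa is a continent.")
(pages / "Europe.txt").write_text("Europe is a continent.")

print(sorted(search(pages, "africa")))
print(sorted(search(pages, "continent")))
```

Every query costs a full scan of every page, which is why this approach stops working once the site grows past hobby size.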

00:09:10
We didn’t actually have a problem with impersonation, but it was obvious you can’t grow a big website where everybody can pretend to be everybody; that’s not going to be good for trust and reputation and so forth. So quickly, I had to write a little login system to store people’s passwords and things like that, so you could have unique identities. Then another example of something you would’ve never thought would work out, and it turned out not to be a problem: to make a link in Wikipedia in the early days, you would make a link to a page that may or may not exist by using CamelCase, meaning uppercase, lowercase, and you smash the words together. So for New York City, you might type N-E-W, no space, capital Y, York City, and that would make a link. But that was ugly; that was clearly not right. So I was like, “Okay, well, that’s just not going to look nice. Let’s just use square brackets; two square brackets makes a link.”

00:10:04
That may have been an option in the software; I’m not sure I thought up square brackets. But anyway, we just did that, which worked really well. It makes nice links, and you can see red links or blue links, depending on whether the page exists or not. But the thing that didn’t occur to me even to think about is that, for example, on the German-language standard keyboard, there is no square bracket. So for German Wikipedia to succeed, people had to learn to do some alt codes to get the square bracket, or a lot of users would find a square bracket somewhere, cut it, and paste it in. Yet German Wikipedia has been a massive success, so somehow that didn’t slow people down.
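The two linking conventions contrasted here can be illustrated with a pair of regular expressions. These patterns are a rough sketch for illustration, not UseModWiki’s or MediaWiki’s actual parser:

```python
import re

# CamelCase links: two or more capitalized word-parts smashed together,
# e.g. NewYorkCity. Square-bracket links: [[Page]] or [[Page|label]].
CAMELCASE = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")
BRACKETED = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

text = "NewYorkCity was ugly; [[New York City]] reads naturally."
print(CAMELCASE.findall(text))
print(BRACKETED.findall(text))
```

The bracket syntax wins on readability: the link target can contain spaces and reads as ordinary prose, while CamelCase forces unnatural word-smashing.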
Lex Fridman
00:10:40
How is it that German keyboards don’t have a square bracket? How do you do programming? How do you live life to its fullest without square brackets?
Jimmy Wales
00:10:48
It’s a very good question. I’m not really sure. Maybe they do now, because keyboard standards have drifted over time as it becomes useful to have a certain character. It’s the same thing with the letter W in Italian: it wasn’t really on keyboards, though I think it is now. In general, W is not a letter in the Italian language, but it appears in enough international words that it’s crept into Italian.
Lex Fridman
00:11:12
All of these things are probably Wikipedia articles in themselves.
Jimmy Wales
00:11:17
Oh, yes. Oh, yeah.
Lex Fridman
00:11:17
The discussion of square brackets-
Jimmy Wales
00:11:17
That is a whole-
Lex Fridman
00:11:17
… in German-
Jimmy Wales
00:11:19
… whole discussion, I’m sure.
Lex Fridman
00:11:20
… on both the English and the German Wikipedia. The difference between those two might be very-
Jimmy Wales
00:11:27
Interesting.
Lex Fridman
00:11:27
… interesting. So Wikidata is fascinating, but even the broader discussion of what is an encyclopedia, can you go to that philosophical question of-
Jimmy Wales
00:11:37
Sure.
Lex Fridman
00:11:37
… what is an encyclopedia?
Jimmy Wales
00:11:39
What is an encyclopedia? The way I would put it is that an encyclopedia, or what our goal is, is the sum of all human knowledge, but “sum” meaning summary. This was an early debate. Somebody started uploading the full text of Hamlet, for example, and we said, “Mmm, wait, hold on a second. That’s not an encyclopedia article. But why not?” So hence was born Wikisource, which is where you put original texts, out-of-copyright texts and things like that, because we said, “No, an encyclopedia article about Hamlet, that’s a perfectly valid thing, but the actual text of the play is not an encyclopedia article.” So most of it’s fairly obvious, but there are some interesting quirks and differences. For example, as I understand it, in French-language encyclopedias it has traditionally been quite common to have recipes, which in an English-language one would be unusual. You wouldn’t find a recipe for chocolate cake in Britannica. Actually, I don’t know the current state; I haven’t thought about that in many, many years now.
Lex Fridman
00:12:44
The state of cake recipes in English Wikipedia?
Jimmy Wales
00:12:47
I wouldn’t say there are chocolate cake recipes. You might find a sample recipe somewhere; I’m not saying there are none. But in general, no, we wouldn’t have recipes-
Lex Fridman
00:12:55
I told myself I would not get outraged in this conversation, but now I’m outraged. I’m deeply upset.
Jimmy Wales
00:13:00
It’s actually very complicated. I love to cook. I’m actually quite a good cook. What’s interesting is it’s very hard to have a neutral recipe because [inaudible 00:13:12]
Lex Fridman
00:13:12
Like a canonical recipe for cake-
Jimmy Wales
00:13:13
A canonical recipe is-
Lex Fridman
00:13:14
… chocolate cake.
Jimmy Wales
00:13:15
… is kind of difficult to come by, because there are so many variants, and it’s all debatable and interesting. For something like chocolate cake, you could probably say, “Here’s one of the earliest recipes,” or, “Here’s one of the most common recipes.” But for many, many things, the variants are as interesting as the dish itself. As somebody said to me recently: 10 Spaniards, 12 paella recipes. So these are all matters of open discussion.
Lex Fridman
00:13:44
Well, just to throw out some numbers: as of May 27, 2023, there are 6.6 million articles in the English Wikipedia, containing over 4.3 billion words. Counting all pages, not just articles, the total is 58 million.
Jimmy Wales
00:14:05
Yeah.
Lex Fridman
00:14:06
Does that blow your mind?
Jimmy Wales
00:14:08
Yes and no. It doesn’t, because I know those numbers and see them from time to time. But in another sense, a deeper sense, yeah, it does. It’s really remarkable. I remember when English Wikipedia passed 100,000 articles, and when German Wikipedia passed 100,000, ’cause I happened to be in Germany with a bunch of Wikipedians that night, and it seemed quite big at the time. We knew then that it was nowhere near complete. I remember at Wikimania at Harvard, when we did our annual conference there in Boston, someone who had come to the conference from Poland had brought along a small encyclopedia, a single-volume encyclopedia of biographies: short biographies, normally a paragraph or so, about famous people in Poland, and there were some 22,000 entries. Even then, in 2006, Wikipedia felt quite big.

00:15:12
He said that in English Wikipedia, there were only a handful of these, less than 10%, I think. So then you realized: yeah, actually, who was the mayor of Warsaw in 1873? Don’t know. It probably wasn’t in English Wikipedia then, though it might be today. There’s so much out there. Of course, what we get into when we’re talking about how many entries there are, and how many there could be, is this very deep philosophical issue of notability, which is the question of: well, how do you draw the limit? How do you decide what belongs? Sometimes people say, “Oh, there should be no limit.” But I think that doesn’t stand up to much scrutiny if you really pause and think about it. So I see in your hand there you’ve got a BIC pen, pretty standard. Everybody’s seen billions of those in their life.
Lex Fridman
00:16:05
Classic though.
Jimmy Wales
00:16:05
It’s a classic, clear BIC pen. So could we have an entry about that type of BIC pen? I bet we do, because it’s a classic. Everybody knows it, and it’s got a history. Actually, there’s something interesting about the BIC company. They make pens, they also make kayaks, and there’s something else they’re famous for. Basically, they’re a definition-by-non-essentials company: anything that’s long and plastic, that’s what they make.
Lex Fridman
00:16:33
Wow, that’s very-
Jimmy Wales
00:16:34
If you want to find the common ground-
Lex Fridman
00:16:36
… platonic form, the platonic form of a BIC.
Jimmy Wales
00:16:37
But could we have an article about that very BIC pen in your hand, Lex Fridman’s BIC pen as of this week?
Lex Fridman
00:16:45
Oh, the very, this instance-
Jimmy Wales
00:16:45
The very specific instance. And the answer is no; there’s not much known about it. I dare say, unless it’s very special to you and your great-grandmother gave it to you or something, you probably know very little about it. It’s a pen. It’s just here in the office. So that’s just to show there is a limit. In German Wikipedia, they used to talk about the rear nut of the wheel of [inaudible 00:17:10] bicycle, [inaudible 00:17:11] a well-known Wikipedian of the time, to illustrate that you can’t have an article about literally everything. So then it raises the question: what can you have an article about, and what can’t you? That can vary depending on the subject matter. One of the areas where we try to be much more careful is biographies. The reason is that with a biography of a living person, if you get it wrong, you can actually be quite hurtful, quite damaging.

00:17:38
So if someone is a private person and somebody tries to create a Wikipedia entry about them, there’s not much to build on; not much is known. So for example, an encyclopedia article about my mother: my mother, a schoolteacher and later a pharmacist, a wonderful woman, but never been in the news, other than me talking about why there shouldn’t be a Wikipedia entry about her (that’s probably made it in somewhere as a standard example). But there’s not enough known. You could imagine a database of genealogy having date of birth, date of death, certain elements like that for private people, but you couldn’t really write a biography. One of the areas where this comes up quite often is what we call BLP1E; we’ve got lots of acronyms. A biography of a living person who’s notable for only one event is a real danger zone.
Lex Fridman
00:18:27
Oh.
Jimmy Wales
00:18:28
The typical example would be a victim of a crime: someone who’s the victim of a famous serial killer, but about whom really not much is known. They weren’t a public person; they’re just the victim of a crime. We really shouldn’t have an article about that person. They’ll be mentioned, of course, and the specific crime might have an article, but for that person, no, not really, because how can you write a biography about someone you don’t know much about? It varies from field to field. So for example, for many academics, we will have an entry that we might not have in a different context, because for an academic, it’s important to have their career, what papers they’ve published, things like that.

00:19:13
You may not know anything about their personal life, but that’s actually not encyclopedically relevant in the same way that it is for a member of a royal family, where it’s basically all about the family. So we’re fairly nuanced about notability and where it comes in. I’ve always thought that the term notability is a little problematic. We’ve struggled with how to talk about it. The problem with notability is that it can feel insulting. If you say, “Oh no, you’re not noteworthy,” well, my mother’s noteworthy. She’s a really important person in my life, so that’s not right. It’s really more about verifiability: is there a way to get information that actually makes an encyclopedia entry?
Lex Fridman
00:19:56
It so happens that there’s a Wikipedia page about me, as I’ve learned recently, and the first thought I had when I saw it was, “Surely I am not notable enough.” So I was very surprised and grateful that such a page could exist. Actually, just allow me to say thank you to all the incredible people that are part of creating and maintaining Wikipedia. It’s my favorite website on the internet. The collection of articles that Wikipedia has created is just incredible. We’ll talk about the various details of that. But the love and care that goes into creating pages for individuals, for a BIC pen, for all this kind of stuff, is just really incredible.

00:20:43
So I just felt the love when I saw that page. But also, because through this podcast I’ve gotten to know a few individuals who are quite controversial, I’ve gotten to be on the receiving end of something quite … to me, as a person who loves other human beings, I’ve gotten to be on the receiving end of some attacks through the Wikipedia form. Like you said, with living individuals, the little details of information can be quite hurtful. I’ve become friends with Elon Musk and I’ve interviewed him, but I’ve also interviewed people on the left, far left, people on the right, some people would say far right, and so now you take a step, you put your toe into the cold pool of politics, and the shark emerges from the depths and pulls you right in.
Jimmy Wales
00:21:41
Yeah, the boiling hot pool of politics.
Lex Fridman
00:21:43
I guess it’s hot. So I got to experience some of that. I think what you also realize is that Wikipedia requires credible, verifiable sources, and there’s a dance there, because some of the sources are pieces of journalism. Of course, journalism operates under its own complicated incentives, such that people can write articles that are not factual, or that cherry-pick, with all the flaws a journalistic article can have-
Jimmy Wales
00:22:18
For sure.
Lex Fridman
00:22:18
… and those can be used as-
Jimmy Wales
00:22:20
For sure.
Lex Fridman
00:22:21
… as sources. It’s like they dance hand in hand. So for me, sadly enough, there was a really concerted attack to say that I was never at MIT, never did anything at MIT. Just to clarify: I am a research scientist at MIT. I have been there since 2015, and I’m there today. I’m at a prestigious, amazing laboratory called LIDS, and I hope to be there for a long time. I work on AI, robotics, machine learning. There are a lot of incredible people there. By the way, MIT has been very kind to defend me. Unlike what Wikipedia says, it is not an unpaid position, and there was no controversy.
Jimmy Wales
00:23:03
Right.
Lex Fridman
00:23:03
It was all very calm and happy and almost boring research that I’ve been doing there. The other thing, because I am half-Ukrainian, half-Russian-
Jimmy Wales
00:23:14
Oh.
Lex Fridman
00:23:15
… and I’ve traveled to Ukraine, and I will travel to Ukraine again, and I will travel to Russia for some very difficult conversations. My heart’s been broken by this war. I have family in both places. It’s been a really difficult time. But the little battles over the biography there also start becoming important, for the first time, for me. I also want to use this opportunity to clarify some inaccuracies there. My father was not born in Chkalovsk, Russia; he was born in Kiev, Ukraine. I was the one born in Chkalovsk, and that town is not in Russia. There is a town called that in Russia, but this is another town, in Tajikistan, a former republic of the Soviet Union. That town is now called B-U-S-T-O-N, Buston, which is funny, ’cause we’re now in Austin, and I’m also often in Boston; it seems like my whole life is surrounded by these kinds of towns.

00:24:13
So I was born in Tajikistan, and the rest of the biography is interesting, but my family is very evenly distributed in their origins, and where they grew up, between Ukraine and Russia, which adds a whole beautiful complexity to this whole thing. So I want to just correct that. The fascinating thing about Wikipedia is that in some sense, those little details don’t matter. But in another sense, what I felt when I saw a Wikipedia page about me, or anybody I know, is this beautiful record that this person existed: a community that notices you, that says, “Huh.” You see a butterfly that floats by, and you’re like, “Huh, it’s not just any butterfly, it’s that one. I like that one.” Or you see a puppy, or it’s this BIC pen: “I remember this one, it has this scratch.” You get noticed in that way, and it’s a beautiful thing. Maybe it’s very silly of me and naive, but I feel like Wikipedia, in terms of individuals, is an opportunity to celebrate people, to celebrate ideas-
Jimmy Wales
00:25:26
For sure. For sure.
Lex Fridman
00:25:26
… and not a battleground of the kind of stuff we might see on Twitter, like the mockery, the derision, this kind of stuff.
Jimmy Wales
00:25:35
For sure.
Lex Fridman
00:25:36
Of course, you don’t want to cherry-pick. All of us have flaws and so on, but it just feels sad to highlight a controversy of some sort when, in most cases, that doesn’t at all represent the entirety of the human.
Jimmy Wales
00:25:50
Yeah. Yeah. So there’s a few things to unpack in all that. First, one of the things I always find very interesting is your status with MIT. Okay, that’s upsetting, and it’s an argument that can be sorted out. But what’s interesting is that you gave as much time to that, which is actually important and relevant to your career, as to where your father was born, which most people would hardly notice, but which is really meaningful to you. I find that a lot when I talk to people who have a biography in Wikipedia: they’re often just as annoyed by a tiny error that no one’s going to notice, like this town in Tajikistan having a new name, nobody even knows what that means, but it can be super important. So that’s one of the reasons why, for biographies, we say human dignity really matters. Some of this has to do with, and this is a common debate that goes on in Wikipedia, what we call undue weight. So I’ll give an example.

00:26:59
There was an article I stumbled across many years ago about the mayor, or no, he wasn’t a mayor, he was a city council member of, I think it was Peoria, Illinois, some small town in the Midwest. The entry said he’d been on the city council for 30 years or whatever. He’s frankly a pretty boring guy, and seems like a good local city politician. But in this very short biography, there was a whole paragraph, a long paragraph, about his son being arrested for a DUI, and it was clearly undue weight. It’s like, “What has this got to do with this guy, if it even deserves a mention?” It wasn’t even clear he had done anything hypocritical, or himself done anything wrong; it was his son who got a DUI.

00:27:44
That’s never great, but it happens to people, and it doesn’t seem like a massive scandal for your dad. So of course, I just took that out immediately; this was a long, long time ago. That’s the sort of thing where we have to really think, in a biography, about controversies, and say, “Is this a real controversy?” In general, one of the things we tend to say is that a separate section, so if there’s a biography and there’s a section called Controversies, that’s actually poor practice, because it just invites people in. Somebody says, “Oh, I want to work on this entry,” and there are, say, seven sections: “Oh, this one’s quite short. Can I add something?”
Lex Fridman
00:28:23
Right?
Jimmy Wales
00:28:24
Go out and find some more controversies. Now that’s nonsense, right?
Lex Fridman
00:28:24
Yeah.
Jimmy Wales
00:28:26
In general, putting it separate from everything else makes it seem worse, and also it doesn’t put it in the right context. Whereas if it’s a real flaw and there is a controversy, and there’s always potential controversy for anyone, it should just be worked into the overall article, ’cause then it doesn’t become a temptation, and you can contextualize it appropriately and so forth. So that’s part of the whole process. But I think for me, one of the most important things is what I call community health. So, are we going to get it wrong sometimes? Yeah, of course. We’re humans, and producing good, quality reference material is hard. The real question is: how do people react to a criticism or a complaint or a concern? Is the reaction defensiveness or combativeness back? Or, if someone’s really in there being aggressive and in the wrong, do we say, “No, no, no, hold on, we’ve got to do this the right way. Are there good sources? Is this contextualized appropriately? Is it even important enough to mention? What does it mean?”

00:29:40
This is one of the areas where I do think there is a very complicated problem, and you’ve alluded to it a little bit: we know the media is deeply flawed. We know that journalism can go wrong. I would say particularly in the last 15 years or so, we’ve seen a real decimation of local media, local newspapers. We’ve seen a real rise in clickbait headlines and an eager focus on anything that might be controversial. We’ve always had that with us, of course; there have always been tabloid newspapers. But that makes it a little bit more challenging to say, “Okay, how do we sort things out when we have a pretty good sense that not every source is valid?” So as an example, a few years ago, it’s been quite a while now, we deprecated the MailOnline as a source. The MailOnline, the digital arm of the Daily Mail, is a tabloid.

00:30:46
It’s not fake news, but it does tend to run very hyped-up stories. They really love to attack people and go on the attack for political reasons and so on, and it just isn’t great. By saying deprecated, and I know some people say, “Oh, you banned the Daily Mail?” No, we didn’t ban it as a source. We just said, “Look, it’s probably not a great source. You should probably look for a better source.” So certainly, if the Daily Mail runs a headline saying, “New Cure for Cancer,” there are probably more serious sources than a tabloid newspaper. In an article about lung cancer, you probably wouldn’t cite the Daily Mail; that’s kind of ridiculous. But also for celebrities and so forth, it’s worth knowing, “Well, they do cover celebrity gossip a lot, but they also tend to have vendettas and so forth.” You really have to step back and go, “Is this really encyclopedic, or is this just the Daily Mail going on a rant?”
Lex Fridman
00:31:39
Some of that requires great community health.
Jimmy Wales
00:31:41
It requires massive community health.
Lex Fridman
00:31:43
Even for me, for stuff I’ve seen that’s actually kind of iffy, about people I know, or things I know about myself, I still feel a love for knowledge emanating from the article. I feel the community health, so I will take the slight inaccuracies. I love it, because it means that, for the most part, there are people with respect and love in this search for knowledge. Sometimes, ’cause I also love Stack Overflow and Stack Exchange for programming-related things, they can get a little cranky, to a degree where it’s not as … you can feel the dynamics of the health of the particular community, and of sub-communities too: a particular C Sharp or Java or Python community, these little communities that emerge. You can feel the levels of toxicity. A little bit of strictness is good, but a little too much is bad, because of the defensiveness: when somebody writes an answer and then somebody else modifies it, they get defensive, and there’s this tension that’s not conducive to improving towards a more truthful depiction of that topic.
Jimmy Wales
00:33:02
Yeah, a great example that I really loved this morning: someone left a note on my user talk page in English Wikipedia with quite a dramatic headline saying, “Racist hook on front page.” So on the front page of Wikipedia, we have a little section called Did You Know?, and it’s just little tidbits and facts, things people find interesting. And there’s a whole process for how things get there. The one somebody was raising a question about compared a very well-known US football player, who is Black, to a Lamborghini. There was a quote from another famous sports person making the comparison. Clearly a compliment. And so somebody said, “Actually, here’s a study, here’s some interesting information about how Black sportspeople are far more often compared to inanimate objects and given that kind of analogy, and I think it’s demeaning to compare a person to a car, et cetera, et cetera.”

00:34:01
But they said, “I’m not pulling, I’m not deleting it, I’m not removing it. I just want to raise the question.” And then there’s this really interesting conversation that goes on where I think the general consensus was, you know what, this isn’t like the alarming headline racist thing on the front page of Wikipedia, holy moly, that sounds bad. But it’s sort of like, actually yeah this probably isn’t the sort of analogy that we think is great. And so we should probably think about how to improve our language and not compare sports people to inanimate objects and particularly be aware of certain racial sensitivities that there might be around that sort of thing if there is a disparity in the media of how people are called.

00:34:40
And I just thought, you know what, nothing for me to weigh in on here. This is a good conversation. Like nobody’s saying people should be banned if they refer to, what was his name, The Fridge, Refrigerator Perry. Very famous comparison to an inanimate object of a Chicago Bears player, many years ago. But they’re just saying, hey, let’s be careful about analogies that we just pick up from the media. I said, “Yeah, that’s good.”
Lex Fridman
00:35:06
The deprecation of news sources is really interesting because I think what you’re saying is, ultimately, you want to make an article-by-article decision, use your own judgment. And it’s such a subtle thing, because there are just a lot of hit pieces written about individuals like myself, for example, that masquerade as an objective, thorough exploration of a human being. It’s fascinating to watch, because controversy and hit pieces just get more clicks.
Jimmy Wales
00:35:41
Oh yeah, sure.
Lex Fridman
00:35:41
This is, I guess, something you become deeply aware of as a Wikipedia contributor, and you start to have a sense, a radar, for clickbait versus truth, to pick out the truth from the clickbaity type of language.
Jimmy Wales
00:35:58
Oh, yeah. I mean it’s really important and we talk a lot about weasel words. And actually I’m sure we’ll end up talking about AI and ChatGPT.
Lex Fridman
00:36:10
Yes.
Jimmy Wales
00:36:10
But just to quickly mention in this area, I think one of the potentially powerful tools, because it is quite good at this, and I’ve played around with it and practiced quite a lot: ChatGPT-4 is really quite able to take a passage and point out potentially biased terms, and to rewrite it to be more neutral. Now, it is a bit anodyne and a bit cliched, so sometimes it just takes the spirit out of something that’s actually not bad, it’s just poetic language, and you’re like, okay, that’s not actually helping. But in many cases I think that sort of thing is quite interesting. And I’m also interested in… can you imagine where you feed in a Wikipedia entry and all the sources, and you say, help me find anything in the article that is not accurately reflecting what’s in the sources? And that doesn’t have to be perfect. It only has to be good enough to be useful to the community.

00:37:17
So if it scans-
Lex Fridman
00:37:19
Beautiful.
Jimmy Wales
00:37:19
… an article and all the sources and you say, oh, it came back with 10 suggestions and seven of them were decent and three of them it just didn’t understand, well actually that’s probably worth my time to do. And it can help us really more quickly get good people to review obscure entries and things like that.
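The workflow sketched in this exchange, feeding an article and its cited sources to a model and triaging the suggestions by hand, can be illustrated in a few lines. The function names and prompt wording below are made up for illustration, not any real Wikipedia tooling, and the model call is left as a pluggable callable:

```python
# Illustrative sketch: pair an article with its numbered sources and ask a
# language model to flag statements the sources do not support. None of
# these names come from real Wikipedia tooling.

def build_verification_prompt(article_text: str, sources: list[str]) -> str:
    """Assemble a prompt pairing an article with its numbered sources."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Below is an encyclopedia article followed by its cited sources.\n"
        "List any statements in the article that are not supported by the\n"
        "sources, quoting each statement and explaining the mismatch.\n\n"
        f"ARTICLE:\n{article_text}\n\nSOURCES:\n{numbered}\n"
    )

def review_article(article_text: str, sources: list[str], ask_model) -> str:
    """ask_model is any callable that sends a prompt string to an LLM and
    returns its reply. As described above, the reply is only a set of
    suggestions: a human editor still checks each one, and it is worth
    using even if only some suggestions turn out to be decent."""
    return ask_model(build_verification_prompt(article_text, sources))
```

The key design point matches what is said in the conversation: the model does not need to be perfect, because a human reviews every suggestion before anything changes in the article.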
Lex Fridman
00:37:41
So just as a small aside on that, and we’ll probably talk about language models a little bit, or a lot more, but one of the articles, one of the hit pieces about me, the journalist actually was very straightforward and honest about having used GPT to write part of the article.
Jimmy Wales
00:37:59
Interesting.
Lex Fridman
00:37:59
And then finding that it made an error, and apologizing for the error that GPT-4 generated. Which has this kind of interesting loop: the articles are used to write Wikipedia pages, GPT is trained on Wikipedia, and there’s this interesting loop where the weasel words and the nuances can get lost or can propagate, even though they’re not grounded in reality. Somehow, in the generation of the language model, new truths can be created and kind of linger.
Jimmy Wales
00:38:35
Yeah, there’s a famous web comic titled Citogenesis, which is about how an error appears in Wikipedia with no source for it, but then a lazy journalist reads it and writes a piece that becomes the source, and then some helpful Wikipedian spots that it has no source, finds that source, and adds it to Wikipedia, and voila, magic. This nearly happened to me once. It was really brief, and I went back and researched it because I thought, this is really odd. So Biography Magazine, which is a magazine published by the Biography TV channel, ran a profile of me, and it said, “In his spare time,” I’m not quoting exactly, it’s been many years, but, “In his spare time he enjoys playing chess with friends.” I thought, wow, that sounds great. I would like to be that guy. But actually, I play chess with my kids sometimes, but no, it’s not a hobby of mine.

00:39:31
And I was like, where did they get that? I contacted the magazine and said, where did that come from? They said, “Oh, it was in Wikipedia.” And I looked in the history; there had been vandalism of Wikipedia, which was not damaging, just false, and it had already been removed. But then I thought, “Oh gosh, I’d better mention this to people, because otherwise somebody’s going to read that and add it to the entry, and it’s going to take on a life of its own.” And then sometimes I wonder if it has, because I was invited a few years ago to do the ceremonial first move in the world chess championship. And I thought, I wonder if they think I’m a really big chess enthusiast because they read this Biography Magazine article.

00:40:10
But that problem, when we think about large language models and the ability to quickly generate very plausible but not true content, I think is something that there’s going to be a lot of shakeout and a lot of implications of that.
Lex Fridman
00:40:25
What would be hilarious is if, because of the social pressure of Wikipedia and the momentum, you would actually start playing a lot more chess. Not only are the articles written based on Wikipedia, but your own life trajectory changes because of the Wikipedia article, just to make it accurate. Something to aspire to.
Jimmy Wales
00:40:45
Aspire to, yes. Yeah, aspirational.
Lex Fridman
00:40:48
If we could just talk about that before we jump back to some other interesting topics in Wikipedia. Let’s talk about GPT-4 and large language models. So they are in part trained on Wikipedia content. What are the pros and cons of these language models? What are your thoughts?
Jimmy Wales
00:41:07
Yeah, so I mean, there’s a lot of stuff going on. Obviously the technology has moved very quickly in the last six months and looks poised to do so for some time to come. So first things first, part of our philosophy is the open licensing, the free licensing, the idea that this is what we’re here for. We are a volunteer community and we write this encyclopedia. We give it to the world to do what you like with, you can modify it, redistribute it, redistribute modified versions, commercially, non-commercially. This is the licensing. So in that sense, of course it’s completely fine. Now, we do worry a bit about attribution because it is a Creative Commons Attribution Share-Alike License. So attribution is important, not just because of our licensing model and things like that, but it’s just proper attribution is just good intellectual practice.

00:42:02
And that’s a really hard complicated question. If I were to write something about my visit here, I might say in a blog post I was in Austin, which is a city in Texas, I’m not going to put a source for Austin as a city in Texas. That’s just general knowledge. I learned it somewhere, I can’t tell you where. So you don’t have to cite and reference every single thing. But if I actually did research and I used something very heavily, it’s just proper, morally proper, to give your sources. So we would like to see that. And obviously they call it grounding. So particularly people at Google are really keen on figuring out grounding.
Lex Fridman
00:42:48
It’s such a cool term. So any text that’s generated trying to ground it to the Wikipedia quality-
Jimmy Wales
00:42:57
A source.
Lex Fridman
00:42:57
… a source. The same kind of standard of what a source means that Wikipedia uses, the same kind of source-
Jimmy Wales
00:42:57
The same kind.
Lex Fridman
00:42:57
… would be generated but with a graph.
Jimmy Wales
00:43:05
The same kind of thing. And of course, one of the biggest flaws in ChatGPT right now is that it just literally will make things up just to be amiable. I think it’s programmed to be very helpful and amiable and it doesn’t really know or care about the truth.
Lex Fridman
00:43:21
Can get bullied into… it can be convinced into…
Jimmy Wales
00:43:25
Well, but this morning, the story I was telling earlier about comparing a football player to a Lamborghini, I thought, is that really racial? I don’t know, but I’m mulling it over. And I thought, oh, I’m going to go to ChatGPT. So I said to ChatGPT-4, “This happened in Wikipedia. Can you think of examples where a white athlete has been compared to a fast car or other inanimate object?” And it comes back with a very plausible essay where it tells why these analogies are common in sports, blah, blah. I said, “No, no, could you give me some specific examples?” So it gives me three specific examples, very plausible, correct names of athletes and contemporaries, and all of it could have been true. I Googled every single quote, and none of them existed. And so I’m like, “Well, that’s really not good.”

00:44:14
I want to explain the thought process I was in. First I thought, how do I Google this? And it’s kind of a hard thing to Google, because unless somebody’s written about this specific topic… it’s a large language model, it’s processed all this data, it can probably piece that together for me, but it just can’t yet. So I think, I hope, that with ChatGPT 5, 6, 7, in three to five years, we’ll see a much higher level of accuracy, where when you ask a question like that, instead of being quite so eager to please by giving you a plausible-sounding answer, it just says, I don’t know.
Lex Fridman
00:44:55
Or maybe display how much bullshit might be in the generated text: I really would like to make you happy right now, but I’m really stretched thin with this generation.
Jimmy Wales
00:45:07
Well, it’s one of the things I’ve said for a long time. In Wikipedia, one of the great things we do, which may not be great for our reputation, except in a deeper sense, for the long term, I think it is: we’ll put up a notice that says the neutrality of this section has been disputed, or the following section doesn’t cite any sources. And I always joke, sometimes I wish the New York Times would run a banner saying the neutrality of this has been disputed. They could tell us, “We had a big fight in the newsroom as to whether to run this or not, but we thought it’s important enough to bring it to you. Just be aware that not all the journalists are on board with it.” Ah, that’s actually interesting, and that’s fine. I would trust them more for that level of transparency. So yeah, similarly, ChatGPT should say, yeah, 87% bullshit.
Lex Fridman
00:45:51
Well, the neutrality one is really interesting because that’s basically a summary of the discussions that are going on underneath. It would be amazing if… I should be honest, I don’t look at the talk page often. It would be nice somehow if there was a kind of summary in this banner way of like, this, lots of wars have been fought on this here land for this here paragraph.
Jimmy Wales
00:46:16
That’s really interesting, I hadn’t thought of that. Because one of the things I do spend a lot of time thinking about these days, and people have found it, we’re moving slowly, but we are moving. Thinking about, okay, these tools exist, are there ways that this stuff can be useful to our community? Because a part of it is we do approach things in a non-commercial way, in a really deep sense. It’s like it’s been great, that Wikipedia has become very popular, but really we’re a community whose hobby is writing an encyclopedia. That’s first, and if it’s popular, great. If it’s not okay, we might have trouble paying for more servers, but it’ll be fine.

00:46:53
And so how do we help the community use these tools? One of the ways that these tools can support people, and one example I never thought about, I’m going to start playing with it, is feed in the article and feed in the talk page and say, can you suggest some warnings in the article based on the conversations in the talk page? I think it might-
Lex Fridman
00:46:53
That’s brilliant.
Jimmy Wales
00:47:12
… be good at that. It might get it wrong sometimes. But again, if it’s reasonably successful at doing that, and you can say, oh, actually, yeah, it does suggest the neutrality of this has been disputed on a section that has a seven-page discussion in the back that might be useful, don’t know, worth playing with.
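The article-plus-talk-page idea discussed here can be sketched the same way. {{POV}}, {{Disputed}}, and {{Unreferenced}} are real English Wikipedia maintenance templates; the function name and prompt wording are made up for illustration, and, as in the conversation, any output would only be a suggestion for a human editor to accept or reject:

```python
# Illustrative sketch: ask a model which maintenance banners a talk-page
# discussion might justify. The template names below are real English
# Wikipedia templates; everything else is invented for this example.

KNOWN_BANNERS = {
    "{{POV}}": "the neutrality of this article is disputed",
    "{{Disputed}}": "the factual accuracy of this article is disputed",
    "{{Unreferenced}}": "this article does not cite any sources",
}

def build_banner_prompt(article: str, talk_page: str) -> str:
    """Combine an article and its talk page into one prompt that asks the
    model to pick from a fixed list of banners, so it cannot invent new
    warning types on its own."""
    choices = "\n".join(f"{tag}: {desc}" for tag, desc in KNOWN_BANNERS.items())
    return (
        "Given this article and its talk-page discussion, suggest which of\n"
        "the following maintenance banners, if any, should be applied, and\n"
        "point to the discussion that justifies each suggestion.\n\n"
        f"BANNERS:\n{choices}\n\nARTICLE:\n{article}\n\nTALK PAGE:\n{talk_page}\n"
    )
```

Constraining the model to a fixed menu of banners keeps its role narrow: it surfaces, say, a seven-page neutrality dispute that a passing reader would never see, and the community decides what to do with it.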
Lex Fridman
00:47:30
Yeah, I mean, some more color on not just the neutrality, but also the amount of emotion laden in the exploration of this particular part of the topic. It might actually help you look at more controversial pages, like the page on the war in Ukraine, or a page on Israel and Palestine. There could be parts that everyone agrees on, and there’s parts that are just like-
Jimmy Wales
00:47:58
Tough.
Lex Fridman
00:47:59
… tough.
Jimmy Wales
00:47:59
The hard parts.
Lex Fridman
00:48:00
It would be nice to, when looking at those beautiful long articles to know, all right, let me just take in some stuff where everybody agrees on.
Jimmy Wales
00:48:09
I can give an example that I haven’t looked at in a long time, but I was really pleased with what I saw at the time. The discussion was that they’re building something in Israel, and for their own political reasons, one side calls it a wall, harkening back to the Berlin Wall, apartheid; the other calls it a security fence. So we can understand quite quickly, if we give it a moment’s thought: okay, I understand why people would have this grappling over the language. You want to highlight the negative aspects of this, or you want to highlight the positive aspects, so you’re going to try and choose a different name. And there was this really fantastic Wikipedia discussion on the talk page: how do we word the paragraph that talks about the different naming? It’s called this by Israelis, it’s called this by Palestinians. And how you explain that to people could be quite charged. You could easily explain, oh, there’s this difference, and it’s because this side’s good and this side’s bad, and that’s why there’s a difference. Or you could say, actually, let’s just try and stay as neutral as we can and try to explain the reasons. So you may come away from it with a concept: oh, okay, I understand what this debate is about now.
Lex Fridman
00:49:26
And just the term Israel-Palestine conflict is still the title of a page in Wikipedia, but the word conflict is a charged word.
Jimmy Wales
00:49:41
Of course.
Lex Fridman
00:49:42
Because from the Palestinian side, or from certain sides, the word conflict doesn’t accurately describe the situation. If you see it as a genocide, a genocide is not a conflict, because to people that challenge the word conflict, a conflict is when there are two equally powerful sides fighting.
Jimmy Wales
00:50:05
Sure, yeah, yeah. No, it’s hard. And in a number of cases, so this actually speaks to a slightly broader phenomenon, which is that there are a number of cases where there is no one word that can get consensus. And in the body of an article, that’s usually okay, because we can explain the whole thing. You can come away with an understanding of why each side wants to use a certain word. But there are some aspects, like the page has to have a title, so there’s that. Same thing with certain things like photos. There are different photos, which one’s best? Lots of different views on that. But at the end of the day, you need the lead photo, because there’s one slot for a lead photo. Categories is another one. So at one point, and I have no idea if it’s in there today, but I don’t think so, I was listed in American entrepreneurs, fine.

00:51:03
American atheists, and I said, that doesn’t feel right to me. Personally it’s true; I wouldn’t disagree with the objective fact of it. But when you click the category, you see a lot of people for whom you might say American atheist activist, because that’s their big issue. So Madalyn Murray O’Hair, or various famous people, Richard Dawkins, who make it a big part of their public argument and persona. But that’s not true of me. It’s just my private personal belief; it’s not something I campaign about. So it felt weird to put me in the category. But what category would you put? And do you need that? In this case I argued it doesn’t need that. I don’t speak about it publicly, except incidentally from time to time, and I don’t campaign about it. So it’s weird to put me with this group of people.

00:51:54
And that argument carried the day, I hope not just because it was me. But categories can be like that, where you’re either in the category or you’re not, and sometimes it’s a lot more complicated than that. And again, we go back to: is it undue weight? If someone who is now prominent in public life and generally considered to be a good person was convicted of something, let’s say a DUI when they were young, in normal discourse we don’t think, oh, this person should be in the category of American criminals. Because you think, a criminal? Yeah, technically speaking, it’s against the law to drive under the influence of alcohol, and you were arrested and you spent a month in prison or whatever, but it’s odd to say that’s a criminal.

00:52:45
So just as an example in this area is Mark Wahlberg, Marky Mark is what I always think of him as, because that was his first sort of famous name, who I wouldn’t think should be listed as in the category, American criminal. Even though he did, he was convicted of quite a bad crime when he was a young person, but we don’t think of him as a criminal. Should the entry talk about that? Yeah, it’s actually an important part of his life story that he had a very rough youth and he could have gone down a really dark path and he turned his life around. That’s actually interesting. So categories are tricky.
Lex Fridman
00:53:20
Especially with people, because we like to assign labels to people and to ideas somehow, and those labels stick. There are certain words that have a lot of power: criminal; political left, right, center; anarchist; objectivist. What other philosophies are there? Marxist, communist, social democrat, democratic socialist, socialist. And if you add that as a category, all of a sudden it’s like, oh boy, you’re that guy now. And I don’t know if you want to be that guy.
Jimmy Wales
00:53:58
Well, there’s definitely some really charged ones like alt-right, I think it’s quite complicated and tough. It’s not completely meaningless label, but boy, I think you really have to pause before you actually put that label on someone, partly because now you’re putting them in a group of people, some of whom are quite, you wouldn’t want to be grouped with.
Lex Fridman
00:54:20
Let’s go into some of the hot water of the pool that, as you mentioned, we’re both dipping a toe in. Do you think Wikipedia has a left-leaning political bias, which is something it is sometimes accused of?
Jimmy Wales
00:54:31
Yeah, so I don’t think so, not broadly. And I think you can always point to specific entries and talk about specific biases, but that’s part of the process of Wikipedia: anyone can come and challenge that and go on about it. But I see fairly often on Twitter some quite extreme accusations of bias, and actually I don’t see it, I don’t buy it. And if you ask people for an example, they normally struggle, depending on who they are and what it’s about. So it’s certainly true that some people have quite fringe viewpoints, and who knows, in the full sweep of history, in 500 years they might be considered to be pathbreaking geniuses, but at the moment, quite fringe views. And they’re just unhappy that Wikipedia doesn’t report their fringe views as being mainstream. And that, by the way, goes across all kinds of fields.

00:55:36
I was once accosted on the street outside the TED Conference in Vancouver by a guy who was a homeopath, who was very upset that Wikipedia’s entry on homeopathy basically says it’s pseudoscience. And he felt that was biased. And I said, “Well, I can’t really help you, because we cite good quality sources to talk about the scientific status, and it’s not very good.” So it depends, and I think it’s something that we should always be vigilant about. But in general, I think we’re pretty good. And any time you go to any serious political controversy, we should have a pretty balanced perspective on who’s saying what and what the views are and so forth. I would actually argue that the areas where we are more likely to have bias that persists for a long period of time are actually fairly obscure things, or maybe fairly non-political things.

00:56:40
I’ll give a kind of humorous example, but it’s meaningful. If you read our entries about Japanese anime, they tend to be very, very positive and very favorable, because almost no one knows about Japanese anime except for fans. And so the people who come and spend their days writing Japanese anime articles love it; they have an inherent love for the whole area. Now, of course, being human beings, they have their internal debates and disputes about what’s better or not. But in general, they’re quite positive, because nobody else actually cares. On anything that people are quite passionate about, then hopefully there’s quite a lot of interesting stuff.

00:57:20
So I’ll give an example, a contemporary example where I think we’ve done a good job as of my most recent sort of look at it, and that is the question about the efficacy of masks during the COVID pandemic. And that’s an area where I would say the public authorities really jerked us all around a bit. In the very first days, they said, “Whatever you do, don’t rush on and buy masks.” And their concern was shortages in hospitals, fair enough. Later it’s like, no, everybody’s got to wear a mask everywhere. It really works really well. And then now I think it’s, the evidence is mixed, right? Masks seem to help, in my personal view, masks seem to help. They’re no huge burden. You might as well wear a mask in any environment where you’re with a giant crowd of people and so forth.

00:58:13
But it’s very politicized, that one, certainly much more so in the US. I live in the UK, I live in London, and I’ve never seen on the streets the kind of thing there are a lot of reports of: people actively angry because someone else is wearing a mask, that sort of thing in public. So it became very politicized. Anyway, if you go to Wikipedia and you research this topic, I think you’ll find more or less what I’ve just said: actually, up to this point in history, it’s mixed evidence. Masks seemed to help, but maybe not as much as some of the authorities said. And here we are.

00:58:56
And that’s kind of an example where I think, okay, we’ve done a good job, but I suspect there are people on both sides of that very emotional debate who think, this is ridiculous. Hopefully we’ve got quality sources. So then hopefully those people who read this can say, oh, actually it is complicated. If you can get to the point of saying, okay, I have my view, but I understand other views and I do think it’s a complicated question, great, now we’re a little bit more mature as a society.
Lex Fridman
00:59:24
Well, that one is an interesting one, because I hope that that article also contains the meta conversation about the politicization of the topic. To me, that’s almost more interesting than whether masks work or not, at least at this point. Why masks became a symbol of the oppression of a centralized government: if you wear them, you’re a sheep that follows the mask-control mass hysteria of an authoritarian regime, and if you don’t wear a mask, then you’re a science denier, an anti-vaxxer, alt-right, probably a Nazi.
Jimmy Wales
01:00:07
Exactly. And that whole politicization of society is just so damaging, and I don’t know, in the broader world, how do we start to fix that? That’s a really hard question.
Lex Fridman
01:00:21
Well, at every moment, because you mentioned mainstream and fringe, there seems to be a tension here, and I wonder what your philosophy is on it. There are mainstream ideas and there are fringe ideas. Look at the lab leak theory for this virus, and there are other things we can discuss, where there’s a mainstream narrative, if you just look at what percent of the population, or of the population with platforms, say it, and then there’s a small percentage in opposition to that. What is Wikipedia’s responsibility to accurately represent both the mainstream and the fringe, do you think?
Jimmy Wales
01:01:05
Well, I think we have to try to do our best to recognize both, but also to appropriately contextualize. And so this can be quite hard, particularly when emotions are high. That’s just a fact about human beings. I’ll give a simpler example, because there’s not a lot of emotion around it. Like our entry on the moon doesn’t say, some say the moon’s made of rocks, some say cheese, who knows? That kind of false neutrality is not what we want to get to. That doesn’t make any sense, but that one’s easy. We all understand. I think there is a Wikipedia entry called something like the moon is made of cheese, where it talks about this is a common sort of joke or thing that children say or that people tell to children or whatever. It’s just a thing. Everyone’s heard moon’s made of cheese, but nobody thinks, wow, Wikipedia is so one-sided it doesn’t even acknowledge the cheese theory. I say the same thing about flat Earth, again, very-
Lex Fridman
01:02:08
That’s exactly what I’m looking up right now.
Jimmy Wales
01:02:09
… very little controversy. We will have an entry about flat Earth theorizing and flat Earth people. My personal view is most of the people who claim to be flat earthers are just having a laugh, trolling, and more power to them, have some fun, but let’s not be ridiculous.
Lex Fridman
01:02:31
Then, of course, for most of human history, people believed that the Earth is flat, so the article I’m looking at is actually kind of focusing on this history. “Flat Earth is an archaic and scientifically disproven conception of the Earth’s shape as a plane or disc. Many ancient cultures subscribed to a flat-Earth cosmography,” with pretty cool pictures of what a flat Earth would look like. With a dragon, is that a dragon? No, angels on the edge. There’s a lot of controversy about that. What is at the edge? Is it a wall? Is it angels, is it dragons, is there a dome?
Jimmy Wales
01:03:00
And how can you fly from South Africa to Perth? Because on a flat Earth view, that’s really too far for any plane to make it because-
Lex Fridman
01:03:09
What I want to know-
Jimmy Wales
01:03:10
It’s all spread out.
Lex Fridman
01:03:11
What I want to know is what’s on the other side, Jimmy, what’s on the other side? That’s what all of us want to know. So I presume there’s probably a small section about the conspiracy theory of flat Earth, because I think there’s a sizeable percent of the population who at least will say they believe in a flat Earth.
Jimmy Wales
01:03:31
Yeah.
Lex Fridman
01:03:32
I think it is a movement that just says to have distrust and skepticism about the mainstream narrative, which, to a very small degree, is probably a productive thing to do as part of the scientific process. But you can get a little silly and ridiculous with it.
Jimmy Wales
01:03:49
Yeah, that’s exactly right. And so I find, in many, many cases, and of course I, like anybody else, might quibble about this or that in any Wikipedia article, but in general, I think there is a pretty good willingness, and indeed eagerness, to say, oh, let’s fairly represent all of the meaningfully important sides. There’s still a lot to unpack in that, right? Meaningfully important. So people who are raising questions about the efficacy of masks, okay, that’s actually a reasonable thing to have a discussion about, and hopefully we treat that as a fair conversation to have and actually address which authorities have said what and so on and so forth. And then there are other cases where it’s not meaningful opposition. I doubt the main article on the Moon even mentions cheese, probably not, because it’s not credible and it’s not even meant to be serious by anyone. Or the article on the Earth certainly won’t have a paragraph that says, well, most scientists think it’s round, but certain people think it’s flat.

01:05:12
That’s just a silly thing to put in that article. You would want to address it as an interesting cultural phenomenon; you want to put it somewhere. So this goes into all kinds of things about politics. You want to be really careful, really thoughtful, about not getting caught up in the anger of our times. I remember being really kind of proud of the US at the time when McCain was running against Obama, because I thought, “I’ve got plenty of disagreements with both of them, but they both seem like thoughtful and interesting people who I would have different disagreements with.” And I always felt like, yeah, that’s good, now we can have a debate, now we can have an interesting debate. And it isn’t just people slamming each other with personal attacks and so forth.
Lex Fridman
01:06:05
You’re saying Wikipedia has also represented that?
Jimmy Wales
01:06:09
I hope so. Yeah, and I think so in the main. Obviously, you can always find debate that went horribly wrong because there’s humans involved.
Lex Fridman
01:06:18
But speaking of those humans, I would venture to guess, I don’t know the data, maybe you can let me know, but the personal political leaning of the group of people who edit Wikipedia probably leans left, I would guess. To me, the question there is, and the same is true for Silicon Valley: the task for Silicon Valley is to create platforms that are not politically biased, even though there is a bias among the engineers who create them. I believe it’s possible to do that. There are conspiracy theories that it’s somehow impossible, this whole conspiracy where the left is controlling it, and so on. I think engineers, for the most part, want to create platforms that are open and unbiased, that surface all kinds of perspectives, because it’s super exciting to have all kinds of perspectives battle it out. But still, is there a degree to which the personal political bias of the editors might seep in, in silly ways and in big ways?

01:07:22
Silly ways could be, I think, hopefully I’m correct in saying this, but the right will call it the Democrat Party and the left will call it the Democratic Party, right? It always hits my ear weird. Are we children here? We’re literally taking words and just jabbing at each other. Yeah, I could capitalize a thing in a certain way, or I can just take a word and mess with it. That’s a small way of how you use words, but you can also have a bigger way about beliefs, about various perspectives on political events, on Hunter Biden’s laptop, on how big of a story that is or not, how big the censorship of that story is or not, and then there are these camps that take very strong positions and construct big narratives around them. A very sizable percent of the population believes each of the two narratives that compete with each other.
Jimmy Wales
01:08:21
Yeah. It’s really interesting, and it’s hard to judge the sweep of history within your own lifetime, but it feels like it’s gotten much worse, that this idea of two parallel universes where people can’t agree on certain basic facts feels worse than it used to be. I’m not sure if that’s true or if it just feels that way, and I’m not sure what the causes are. I think I would lay a lot of the blame in recent years on social media algorithms, which reward clickbait headlines, which reward tweets that go viral, and they go viral because they’re cute and clever.

01:09:13
My most successful tweet ever by a fairly wide margin: some reporter tweeted at Elon Musk, because he was complaining about Wikipedia or something, “You should buy Wikipedia,” and I just wrote, “Not for sale,” and 90 zillion retweets, and people liked it, and it was all very good, but I’m like, “You know what? It’s a cute line and it’s a good mic drop,” and all that, and I was pleased with myself. I’m like, “It’s not really a discourse.” It’s not really what I like to do, but it’s what social media really rewards, which is kind of a let’s-you-and-him-have-a-fight, and that’s more interesting. It’s funny because at the time, I was texting with Elon, who’s very pleasant to me, and all of that.
Lex Fridman
01:10:01
He might have been a little bit shitty, the reporter might have been a little bit shitty, but you fed into the shittiness with a snarky, funny response, “Not for sale,” and where do you… That’s a funny little exchange, and you can probably laugh it off afterward and it’s fun, but that kind of mechanism that rewards the snark can go into viciousness.
Jimmy Wales
01:10:22
Yeah. Well, and we certainly see it online. A series of tweets, sort of a tweet thread of 15 tweets that assesses the quality of the evidence for masks, pros and cons, and sort of weighs it, that’s not going to go viral. But a smackdown of a famous politician who was famously in favor of masks, who also went to a dinner and didn’t wear a mask, that’s going to go viral, and that’s partly human nature. People love to call out hypocrisy and all of that, but it’s partly what these systems elevate automatically. I talk about this with respect to Facebook, for example. I think Facebook has done a pretty good job, although it’s taken longer than it should in some cases, but if you have a very large following and you’re really spouting hatred or misinformation, disinformation, they’ve kicked people off.

01:11:24
They’ve done some reasonable things there, but actually, on the deeper issue of the anger we’re talking about, of the contentiousness of everything, I give a family example with two great stereotypes: one, the crackpot racist uncle, and one, the sweet grandma. I always want to point out all of the uncles in my family were wonderful people, so I didn’t have a crackpot racist uncle, but everybody knows the stereotype. Well, so grandma, she just posts sweet comments on the kids’ pictures and congratulates people on their wedding anniversaries, and the crackpot uncle’s posting his nonsense. Normally, it’s at Christmas dinner, everybody rolls their eyes, “Oh, yeah, Uncle Frank’s here, and he’s probably going to say some racist comment and we’re going to tell him to shut up, or maybe let’s not invite him this year.” Normal human drama. He’s got his three mates down at the pub who listen to him and all of that, but now grandma’s got 54 followers on Facebook, which is the intimate family, and the racist uncle has 714, so he’s not a massive influence or whatever, but how did that happen?

01:12:36
It’s because the algorithm notices when she posts, nothing happens. He posts and then everybody jumps in to go, “God, shut up, Uncle Frank. That’s outrageous,” and there’s engagement, there’s page views, there’s ads. Those algorithms, I think they’re working to improve that, but it’s really hard for them. It’s hard to improve that if that actually is working. If the people who are saying things that get engagement, if it’s not too awful, but it’s just, maybe it’s not a racist uncle, but maybe it’s an uncle who posts a lot about what an idiot Biden is, which isn’t necessarily an offensive or blockable or bannable thing, and it shouldn’t be, but if that’s the discourse that gets elevated because it gets a rise out of people, then suddenly in a society, it’s like, “Oh, we get more of what we reward,” so I think that’s a piece of what’s gone on.
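The loop Wales describes, where the provocative uncle outranks the sweet grandma simply because every reaction counts as engagement, can be sketched as a toy ranking function. The weights, post counts, and names here are invented for illustration and are not any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    angry_replies: int
    shares: int

def engagement_score(p: Post) -> float:
    # Every interaction counts toward ranking; an outraged reply is
    # worth as much as (here, more than) a like. The weights are made up.
    return 1.0 * p.likes + 1.5 * p.angry_replies + 2.0 * p.shares

# Invented example data: grandma's sweet post vs. the uncle's rant.
grandma = Post("grandma", likes=8, angry_replies=0, shares=0)
uncle = Post("uncle_frank", likes=2, angry_replies=25, shares=5)

feed = sorted([grandma, uncle], key=engagement_score, reverse=True)
print([p.author for p in feed])  # uncle_frank ranks first
```

Because angry replies and shares feed the same score as likes, "optimize any engagement" is exactly the design choice that elevates Uncle Frank; a wellbeing-aware scorer would have to weight reaction types differently, which is the harder problem Wales points at.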
Lex Fridman
01:13:28
Well, if we could just take that tangent. I’m having a conversation with Mark Zuckerberg for a second time. Is there something you can comment on about how to decrease toxicity on that particular platform, Facebook? You’ve also worked on creating a less toxic social network yourself, so can we just talk about the different ideas, what these already-big social networks can do and what you have been trying to do?
Jimmy Wales
01:13:55
A piece of it is, it’s hard. The problem with making a recommendation to Facebook is that I actually believe their business model makes it really hard for them, and I’m not anti-capitalism. Somebody’s got a business, they’re making money, great; that’s not where my criticism comes from. But certain business models mean you are going to prioritize things that maybe aren’t long-term healthy, and so that’s a big piece of it. Certainly, for Facebook, you could say, with vast resources, start to prioritize content that’s higher quality, that’s healing, that’s kind. Try not to prioritize content that seems to be just getting a rise out of people. Now, those are vague human descriptions, but I do believe with good machine learning algorithms, you can optimize in slightly different ways, but to do that, you may have to say, “Actually, we’re not necessarily going to increase page views to the maximum extent right now.”

01:14:59
I’ve said this to people at Facebook. It’s like, if your actions are convincing people that you’re breaking Western civilization, that’s really bad for business in the long run. Certainly, these days, I’ll say, Twitter is the thing that’s on people’s minds as being more upsetting at the moment, but I think it’s true. One of the things that’s really interesting about Facebook compared to a lot of companies is that Mark has a pretty unprecedented amount of power. His ability to name members of the board, his control of the company, is pretty hard to break, even if financial results aren’t as good as they could be because he’s taken a step back from the perfect optimization to say, “Actually, for the long-term health of this organization over the next 50 years, we need to rein in some of the things that are working for us in making money, because they’re actually giving us a bad reputation.” One of the recommendations I would make, and this is not to do with the algorithms and all that, is: how about just a moratorium on all political advertising?

01:16:11
I don’t think it’s their most profitable segment, but it’s given rise to a lot of deep, hard questions about dark money, about ads that are run by questionable people that push false narratives, or the classic kind of thing is you run… I saw a study about Brexit in the UK where people were talking about ads run targeting animal rights activists saying, “Finally, when we’re out from under Europe, the UK can pass proper animal rights legislation. We’re not constrained by the European process.” Similarly, for people who are advocates of fox hunting, to say, “Finally, when we’re out of Europe, we can re-implement…” You’re telling people what they want to hear, and in some cases, it’s really hard for journalists to see that. It used to be that for political advertising, you really needed to find some kind of mainstream narrative, and this is still true to an extent, a mainstream narrative that 60% of people can say, “Oh, I can buy into that,” which meant it pushed you to the center.

01:17:20
It pushed you to try and find some nuanced balance, but if your main method of recruiting people is a tiny little one-on-one conversation with them, because you’re able to target using targeted advertising, suddenly you don’t need consistency. You just need a really good targeting operation, really good Cambridge Analytica-style machine learning algorithms and data to convince people. That just feels really problematic, so until they can think about how to solve that problem, I would just say, “You know what? It’s going to cost us X amount, but it’s going to be worth it to say, ‘You know what? We actually think our political advertising policy hasn’t really helped contribute to discourse and dialogue in finding reasoned middle ground and compromise solutions, so let’s just not do that for a while until we figure it out.’” So that’s maybe a piece of advice.
Lex Fridman
01:18:15
Coupled with, as you were saying, recommender systems for the newsfeed and other contexts that don’t always optimize engagement, but optimize the long-term mental wellbeing and balance and growth of a human being. But it’s a very difficult problem.
Jimmy Wales
01:18:33
It’s a difficult problem. Yeah. With WT Social, WikiTribune Social, we’re launching in a few months’ time a completely new system, new domain, and lots of new things, but the idea is to say, let’s focus on trust. People can rate each other as trustworthy, rate content as trustworthy. You have to start from somewhere, so it’ll start with a core base of our tiny community who, I think, are sensible, thoughtful people, and we want to recruit more, but to say, “You know what? Actually, let’s have that as a pretty strong element.” Let’s not optimize based on what gets the most page views in this session; let’s optimize on the feedback from people: is this meaningfully enhancing my life? Part of that, and it’s probably not a good business model, but part of that is to say, “Okay, we’re not going to pursue an advertising business model, but a membership model, where you don’t have to be a member, but you can pay to be a member.”

01:19:36
You maybe get some benefit from that, but in general, to say, actually, the problem with… Actually, the analogy I would give is that broadcast television funded by advertising gives you a different result than paying for HBO, paying for Netflix, paying for whatever. The reason is, if you think about it, what is your incentive as a TV producer? If you’re going to make a comedy for the ABC network in the US, you basically say, “I want something that almost everybody will like and listen to,” so it tends to be a little blander, family-friendly, whatever. Whereas if you say, and I’m going to use the HBO example, an old example, “You know what? Sopranos isn’t for everybody, Sex and the City isn’t for everybody, but between the two shows, we’ve got something for everybody that they’re willing to pay for,” you can get edgier, higher-quality (in my own view) content, rather than saying it’s got to not offend anybody in the world. It’s got to be for everybody, which is really hard.

01:20:47
Same thing here in a social network. If your business model is advertising, it’s going to drive you in one direction. If your business model is membership, I think it drives you in a different direction. Actually, and I’ve said this to Elon about Twitter Blue, which I think wasn’t rolled out well and so forth, but the piece of it that I like is to say, look, if there’s a model where your revenue is coming from people who are willing to pay for the service, even if it’s only part of your revenue, if it’s a substantial part, that does change your broader incentives, to ask: actually, are people going to be willing to pay for something that’s just toxicity in their lives? Now, I’m not sure it’s been rolled out well, I’m not sure how it’s going, and maybe I’m wrong about that as a plausible business model, but I do think it’s interesting to think about, just in broad terms: business model drives outcomes in sometimes surprising ways unless you really pause to think about it.
Lex Fridman
01:21:46
If we can just linger on Twitter and Elon before… I would love to talk to you about the underlying business model of Wikipedia, which was this brilliant, bold move at the very beginning, but since you mentioned Twitter, what do you think works? What do you think is broken about Twitter?
Jimmy Wales
01:22:03
It’s a long conversation, but to start with, one of the things that I always say is it’s a really hard problem, so I concede that right up front. I said this about the old ownership of Twitter and the new ownership of Twitter because unlike Wikipedia, and this is true actually for all social media, there’s a box, and the box basically says, “What do you think? What’s on your mind?” You can write whatever the hell you want, right? This is true, by the way, even for YouTube. I mean the box is to upload a video, but again, it’s just an open-ended invitation to express yourself.

01:22:38
What makes that hard is some people have really toxic, really bad intent, some people are very aggressive, they’re actually stalking, they’re actually abusive, and suddenly, you deal with a lot of problems. Whereas at Wikipedia, there is no box that says, “What’s on your mind?” There’s a box that says, “This is an entry about the moon. Please be neutral. Please state your facts.” Then there’s a talk page, which is not “come and rant about Donald Trump.” If you go on the talk page of the Donald Trump entry and you just start ranting about Donald Trump, people will say, “What are you doing? Stop doing that. We’re not here for that. There’s a whole world of the internet out there for you to go and rant about Donald Trump.”
Lex Fridman
01:23:17
It’s just not as fun to do on Wikipedia as it somehow is on Twitter.
Jimmy Wales
01:23:20
Well, also on Wikipedia, people are going to say, “Stop,” and, “Actually, are you here to tell us how we can improve the article, or are you just here to rant about Trump? Because that’s not actually interesting.” The goal is different, so that’s just admitting and saying upfront, this is a hard problem. Certainly, I’m writing a book on trust. The idea is, in the last 20 years, we’ve lost trust in all kinds of institutions, in politics. The Edelman Trust Barometer survey has been done for a long time, and trust in politicians, trust in journalism, has declined substantially, and I think in many cases, deservedly so. How do we restore trust, and how do we think about that?
Lex Fridman
01:24:07
Does that also include trust in the idea of truth?
Jimmy Wales
01:24:13
Trust in the idea of truth. Even the concept of facts and truth is really, really important, and the idea of uncomfortable truths is really important. When we look at Twitter, we can see, okay, this is really hard, so here’s my story about Twitter. It’s a two-part story, and it’s all pre-Elon Musk ownership. Many years back, somebody accused me of horrible crimes on Twitter, and like anybody would, I was like… I’m in the public eye. People say bad things. I don’t really… I brush it off, whatever, but I’m like, “This is actually really bad.” Accusing me of pedophilia? That’s just not okay, so I thought, “I’m going to report this,” so I click report, and I report the tweet, and there are five others, and I report those, and I go through the process, and then I get an email a couple of hours later saying, “Thank you for your report. We’re looking into this.” Great. Okay, good.

01:25:16
Then several hours further, I get an email back saying, “Sorry, we don’t see anything here to violate our terms of use,” and I’m like, “Okay,” so I emailed Jack and I say, “Jack, come on. This is ridiculous,” and he emails back roughly saying, “Yeah, sorry, Jimmy. Don’t worry. We’ll sort this out.” I just thought to myself, “You know what? That’s not the point. I’m Jimmy Wales, I know Jack Dorsey. I can email Jack Dorsey. He’ll listen to me because he’s got an email from me and sorts it out for me.” What about the teenager who’s being bullied and is getting abuse and getting accusations that aren’t true? Are they getting the same kind of really poor result in that case? Fast-forward a few years, same thing happens. The exact quote, it goes, “Please help me. I’m only 10 years old, and Jimmy Wales raped me last week.” I was like, “Come on. Fuck off. That’s ridiculous,” so I report. I’m like, “This time I’m reporting,” but I’m thinking, “Well, we’ll see what happens.”

01:26:15
This one gets even worse, because then I get the same result, an email back saying, “Sorry, we don’t see any problems,” so I raised it with other members of the board who I know, and Jack, like, “This is really ridiculous. This is outrageous,” and some of the board members, friends of mine, were sympathetic, so good for them. But I actually got an email back from the general counsel, the head of trust and safety, saying, “Actually, there’s nothing in this tweet that violates our terms of service,” and they gave reference to the Me Too movement: if we didn’t allow accusations, the Me Too movement, which is an important thing, couldn’t happen. And I was like, “You know what? Actually, if someone says, ‘I’m 10 years old and someone raped me last week,’ I think the advice should be, ‘Here’s the phone number of the police.’ You need to get the police involved. Twitter’s not the place for that accusation.”

01:27:05
Even back then… By the way, they did delete those tweets, but the rationale they gave was spammy behavior, so completely separate from the abuse of me. It was just like, “Oh, well, they were retweeting too often.” Okay, whatever. That’s just broken. That’s a system that’s not working for people in the public eye, and I’m sure it’s not working for private people who get abuse. Really horrible abuse can happen. How is that today? Well, it hasn’t happened to me since Elon took over, but I don’t see why it couldn’t, and I suspect now if I send a report and email someone, there’s no one there to email me back, because he’s gotten rid of a lot of the trust and safety staff, so I suspect that problem is still really hard.
Lex Fridman
01:27:46
Just content moderation at huge scales.
Jimmy Wales
01:27:49
At huge scale, it really is something. I don’t know the full answer to this. A piece of it could be to say, “Actually, for making specific allegations of crimes, this isn’t the place to do that. We’ve got a huge database; if you’ve got an accusation of a crime, here’s who you should call: the police, the FBI, whatever it is. It’s not to be done in public.” Then you do face really complicated questions about the Me Too movement and people coming forward in public and all of that, but again, it’s like, probably you should talk to a journalist. Probably there are better avenues than just tweeting from an account that was created 10 days ago, obviously set up to abuse someone. I think they could do a lot better, but I also admit it’s a hard problem.
Lex Fridman
01:28:38
There are also ways to make the same kinds of accusations indirectly, or more humorously, or in a more mocking way. In fact, the accusations you mentioned, if I were to guess, don’t go that viral because they’re not funny enough or cutting enough, but if you make it witty and cutting and meme it somehow, sometimes indirectly making an accusation, versus directly making it, can go viral, and that can destroy reputations, and you get to watch as all kinds of narratives take hold.
Jimmy Wales
01:29:09
Yeah, no, I remember another case that didn’t bother me because it wasn’t of that nature, but somebody was saying, “I’m sure you’re making millions off of Wikipedia,” and I’m like, “No, actually, I don’t even work there. I have no salary,” and they’re like, “You’re lying. I’m going to check your 990 form,” which is the US form for tax reporting for charities, and I was like, “Yeah, here’s the link. Go read it and you’ll see I’m listed as a board member, and my salary is listed as zero.” Things like that, it’s like, “Okay.” That one, that feels like you’re wrong, but I can take that and we can have that debate quite quickly.

01:29:52
Again, it didn’t go viral because it was kind of silly, and if anything would’ve gone viral, it was me responding, but that’s one where it’s like, actually, I’m happy to respond, because a lot of people don’t know that I don’t work there and that I don’t make millions, and I’m not a billionaire. Well, they must know that, because it’s in most news media about me. But the other one, I didn’t respond to publicly because of the Streisand effect. Sometimes calling attention to someone who’s abusing you, who basically has no followers and so on, is just a waste.
Lex Fridman
01:30:24
And everything you’re describing now is just something that all of us have to learn, because everybody’s in the public eye. I think when you have just two followers and you get bullied by one of them, it hurts just as much as when you have a large number, so it’s not… Your situation, I think, is echoed in the situations of millions of others, especially teenagers and kids and so on.
Jimmy Wales
01:30:43
Yeah, no, there’s actually an example. We don’t generally use my picture in the banners anymore on Wikipedia, but we did, and then we did an experiment one year where we tried other people’s pictures, so one of our developers, a lovely, very sweet guy, and he doesn’t look like your immediate thought of a nerdy Silicon Valley developer. He looks like a heavy metal dude, because he’s cool. Suddenly, here he is with long hair and tattoos, saying, “Here’s what your money goes for. Here’s my letter asking for support,” and he got massive abuse, people calling him creepy, really massive. This was being shown to 80 million people a day, his picture, not the abuse. The abuse was elsewhere on the internet. He was bothered by it.

01:31:39
I thought, “You know what? There is a difference.” I actually am in the public eye. I get huge benefits from being in the public eye. I go around and make public speeches. Any random thing I think of, I can write and get it published in the New York Times, and I have this interesting life. He’s not a public figure, and so actually he wasn’t mad at us. It was just like, actually, suddenly being thrust in the public eye and you get suddenly lots of abuse, which normally, I think if you’re a teenager and somebody in your class is abusing you, it’s not going to go viral. It’s going to be hurtful because it’s local and it’s your classmates or whatever, but when ordinary people go viral in some abusive way, it’s really, really quite tragic.
Lex Fridman
01:32:24
I don’t know. Even at a small scale, it feels viral. When five people at your school, and there’s a rumor, and there’s this feeling like you’re surrounded, and the feeling of loneliness, I think, which you’re speaking to when you at least feel like you don’t have a platform to defend yourself, and then this powerlessness, that I think a lot of teenagers definitely feel, and a lot of people-
Jimmy Wales
01:32:49
I think you’re right.
Lex Fridman
01:32:51
I think even when just two people make up stuff about you or lie about you or say mean things about you or bully you, that can feel like a crowd.
Jimmy Wales
01:33:01
Yeah. No, that’s true.
Lex Fridman
01:33:03
Whatever that is in our genetics and our biology and the way our brain works, that just can be a terrifying experience. Somehow, to correct that, I think because everybody feels the pain of that, everybody suffers the pain of that, I think we’ll be forced to fix that as a society, to figure out a way around that.
Jimmy Wales
01:33:22
I think it’s really hard to fix, because that problem isn’t necessarily new. Someone in high school who writes graffiti that says, “Becky is a slut,” and spreads a rumor about what Becky did last weekend, that’s always been damaging, it’s always been hurtful, and that’s really hard.
Lex Fridman
01:33:45
Those kinds of attacks are as old as time itself; they precede the internet. Now, what do you think about this technology that feels Wikipedia-like, which is Community Notes on Twitter? Do you like it? Pros and cons? Do you think it’s scalable?
Jimmy Wales
01:34:00
I do like it. I don’t know enough about specifically how it’s implemented to really have a very deep view, but I do think it’s quite… The uses I’ve seen of it, I’ve found quite good, and in some cases, it changed my mind. It’s like, I see something, and of course, the human tendency is to retweet something that you hope is true or that you are afraid is true, or it’s that kind of quick mental action. Then I saw something that I liked and agreed with, and then a community note under it that made me think, “Oh, actually, this is a more nuanced issue,” so I like that. I think that’s really important. Now, how is it specifically implemented? Is it scalable? I don’t really know how they’ve done it, so I can’t really comment on that. But in general, I do think, and you’re a big Twitter user, you know the platform, you’ve got plenty of followers and all of that, the only mechanisms on Twitter are retweeting, replying, blocking.

01:35:13
It’s a pretty limited scope, and it’s kind of good if there’s a way to elevate a specific thoughtful response. It kind of goes to, again, does the algorithm just pick the retweet or the… I mean, with retweeting, it’s not even the algorithm that makes it viral. Paulo Coelho, the very famous author, I think he’s got… I don’t know. I haven’t looked lately. He used to have eight million Twitter followers; I think I looked, and he’s got 16 million now or whatever. Well, if he retweets something, it’s going to get seen a lot. Elon Musk, if he retweets something, it’s going to get seen a lot. That’s not an algorithm. That’s just the way the platform works. So, it is kind of nice if you have something else, and how that something else is designed, that’s obviously a complicated question.
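Wales says he doesn't know how Community Notes is implemented; its publicly documented approach is "bridging": a note is surfaced only when raters who normally disagree both find it helpful. Here is a toy sketch of that idea, with invented raters and a simplified threshold rule standing in for the real matrix-factorization model:

```python
# Toy "bridging" check in the spirit of Community Notes: a note is
# shown only when raters from both sides of a disagreement axis rate
# it helpful. All rater leanings and ratings are invented data.
def note_is_shown(ratings: dict, leaning: dict, threshold: float = 0.5) -> bool:
    left = [ok for r, ok in ratings.items() if leaning[r] < 0]
    right = [ok for r, ok in ratings.items() if leaning[r] > 0]
    if not left or not right:
        return False  # need raters from both camps
    frac = lambda xs: sum(xs) / len(xs)
    return frac(left) >= threshold and frac(right) >= threshold

# Hypothetical raters on a left/right axis (negative = left).
leaning = {"a": -1.0, "b": -0.5, "c": 0.8, "d": 0.9}
partisan_note = {"a": True, "b": True, "c": False, "d": False}
bridging_note = {"a": True, "b": True, "c": True, "d": False}

print(note_is_shown(partisan_note, leaning))  # False: only one camp likes it
print(note_is_shown(bridging_note, leaning))  # True: support crosses the divide
```

The design choice is the interesting part: unlike raw retweet counts, a note that pleases only one camp never rises, which is why such notes can change minds rather than just win engagement.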
Lex Fridman
01:35:58
Well, there’s this interesting thing that I think Twitter is doing, but I know Facebook is doing for sure, which is really interesting: what are the signals that a human can provide at scale? On Twitter, it’s the retweet. On Facebook, I think you can share, and there are other basic interactions, you can comment and so on. But there’s also, on Facebook, and YouTube has this too: “Would you like to see more of this, or would you like to see less of this?” They post that sometimes. The thing that the neural net learning from that has to figure out is the intent behind you saying, “I want to see less of this.”

01:36:39
Did you see too much of this content already? You like it, but you don’t want to see so much of it; you already figured it out, great. Or does this content not make you feel good? There are so many interpretations of “I would like to see less of this,” but if you get that kind of signal right, it actually can create a really powerfully curated list of content that is fed to you every day, that doesn’t create an echo chamber or a silo, that actually just makes you feel good in the good way, in that it challenges you, but it doesn’t exhaust you and make you this weird animal.
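Because the intent behind a single "see less of this" click is ambiguous, one common design, sketched here with hypothetical topics and decay factors, is to treat the click as a soft down-weight rather than a hard block, so a topic fades but can recover from later positive signals:

```python
# Hypothetical soft down-weighting from "see less of this" clicks.
# Topics, decay, boost, and cap values are all invented for illustration.
topic_weight = {"physics": 1.0, "politics": 1.0, "cats": 1.0}

def see_less(topic: str, decay: float = 0.5) -> None:
    # Soft decay: the topic's sampling weight shrinks but never hits zero,
    # since we can't tell fatigue apart from genuine dislike.
    topic_weight[topic] *= decay

def see_more(topic: str, boost: float = 1.5, cap: float = 3.0) -> None:
    # Positive signal: boost, but cap to avoid runaway echo chambers.
    topic_weight[topic] = min(topic_weight[topic] * boost, cap)

see_less("politics")
see_less("politics")
see_more("physics")
print(topic_weight)  # politics damped to 0.25, physics boosted to 1.5
```

The soft decay is one answer to the ambiguity Lex raises: the system hedges against misreading the click, instead of committing to a single interpretation of the user's intent.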
Jimmy Wales
01:37:20
I’ve been saying for a long time, if I went on Facebook one morning and they said, “Ooh, we’re testing a new option. Rather than showing you things we think you’re going to like, we want to show you some things that we think you will disagree with, but which we have some signals that suggest are of quality,” I’m like, “Now, that sounds interesting.”
Lex Fridman
01:37:40
Yeah, that sounds really interesting.
Jimmy Wales
01:37:41
I want to see something where… Oh, I don’t agree with… Larry Lessig is a good friend of mine, founder of Creative Commons, and he’s moved on to doing stuff about corruption in politics and so on. I don’t always agree with Larry, but I always grapple with Larry, because he’s so interesting and he’s so thoughtful that even when we don’t agree, I’m like, “Actually, I want to hear him out, because I’m going to learn from it.” That doesn’t mean I always come around to agreeing with him, but I’m going to understand a perspective, and that’s a really great feeling.
Lex Fridman
01:38:12
Yeah, there’s this interesting thing on social media where people accuse others, saying, “Well, you don’t want to hear opinions that you disagree with or ideas you disagree with.” It’s something that’s thrown at me all the time. The reality is, there’s literally almost nothing I enjoy more.
Jimmy Wales
01:38:29
It seems an odd thing to accuse you of because you have quite a wide range of long conversations with a very diverse bunch of people.
Lex Fridman
01:38:35
But there is a very, very harsh drop-off, because what I like is high-quality disagreement, disagreement that really makes me think. At a certain point, there’s a threshold, a kind of gray area, where the quality of the disagreement just sounds like mocking, and you’re not really interested in a deep understanding of the topic, or you yourself don’t seem to carry a deep understanding of the topic. There’s something called Intelligence Squared debates that may-
Lex Fridman
01:39:00
There’s something called Intelligence Squared debates. The main one is the British version. With the British accent, everything always sounds better, and the Brits seem to argue more intensely, like they’re invigorated, they’re energized by the debate. Those are people I often disagree with, basically everybody involved, and it’s so fun. I learn something. That’s high quality. If we could do that, if there were some way for me to click a button that says, “Filter out the lower quality, just for today.” Sometimes show it to me, because I want to be able to see it, but today I’m just not in the mood for the mockery.

01:39:38
Just high-quality stuff, because even with flat Earth, I want to get high-quality arguments for the flat Earth. It would make me feel good, because I would see, “Oh, that’s really interesting. I never really thought to challenge the mainstream narrative of general relativity, of our perception of physics. Maybe all of reality, maybe all of space, is an illusion. That’s really interesting. I never really thought about that; let me consider it fully. Okay, what’s the evidence? How would you test that? What are the alternatives? How would you be able to have such a consistent perception of a physical reality if all of it is an illusion? All of us seem to share the same kind of perception of reality.” That’s the kind of stuff I love, but not the cheap mockery of it that social media seems to inspire.
Jimmy Wales
01:40:34
Yeah. I talk sometimes about how people assume that the big debates on Wikipedia, the arguments, are between the party of the left and the party of the right. And I would say no, it’s actually the party of the kind and thoughtful and the party of the jerks, really. Left and right? Yeah, bring me somebody I disagree with politically; as long as they’re thoughtful and kind, we’re going to have a real discussion. I give the example of our article on abortion: if you can bring together a kind and thoughtful Catholic priest and a kind and thoughtful Planned Parenthood activist, and they’re going to work together on the article on abortion, that can be a really great thing, if they’re both kind and thoughtful. That’s the important part. They’re never going to agree on the topic, but they will understand: okay, Wikipedia is not going to take a side, but Wikipedia is going to explain what the debate is about, and we’re going to try to characterize it fairly.

01:41:36
And it turns out your kind and thoughtful people, even if they’re quite ideological, like a Catholic priest is generally going to be quite ideological on the subject of abortion, but they can grapple with ideas and they can discuss, and they may feel very proud of the entry at the end of the day, not because they suppress the other side’s views, but because they think the case has been stated very well that other people can come to understand it. And if you’re highly ideological, you assume, I think naturally, “If people understood as much about this as I do, they’ll probably agree with me.” You may be wrong about that, but that’s often the case. So, that’s what I think we need to encourage more of in society generally, is grappling with ideas in a really thoughtful way.
Lex Fridman
01:42:21
So is it possible, if the majority of volunteers, editors of Wikipedia, really disliked Donald Trump, are they still able to write an article that empathizes with the perspective of, for a time at least, a very large percentage of the United States that was supportive of Donald Trump, and to have a full, broad representation of him as a human being, him as a political leader, him as a set of policies promised and implemented, all that kind of stuff?
Jimmy Wales
01:42:55
Yeah, I think so. And I think if you read the article, it’s pretty good. And I think a piece of that is within our community, if people have the self-awareness to understand. So, I personally wouldn’t go and edit the entry on Donald Trump. I get emotional about it and I’m like, “I’m not good at this,” and if I tried to do it, I would fail. I wouldn’t be a good Wikipedian, so it’s better if I just step back and let people who are more dispassionate on this topic edit it. Whereas there are other topics that are incredibly emotional to some people where I can actually do quite well. I’m going to be okay. Maybe we were discussing earlier the efficacy of masks. I’m like, “Oh, I think that’s an interesting problem. And I don’t know the answer, but I can help catalog what’s the best evidence and so on.”

01:43:48
I’m not going to get upset. I’m not going to get angry; I’m able to be a good Wikipedian, so I think that’s important. And I do think, though, in a related framework, that the composition of the community is really important. Not because Wikipedia is or should be a battleground, but because of blind spots: maybe I don’t even realize what’s biased if I’m particularly of a certain point of view and I’ve never thought much about it. So it’s one of the things we focus on a lot. The Wikipedia volunteers are, we don’t know the exact number, but let’s say 80% plus male, and they’re a certain demographic: they tend to be college educated, heavier on tech geeks than not, et cetera. So, there is a demographic to the community, and that’s pretty much global. Somebody said to me once, “Why is it only white men who edit Wikipedia?”, and I said, “You’ve obviously not met the Japanese Wikipedia community.”

01:44:51
It’s a joke because the broader principle still stands, who edits Japanese Wikipedia? A bunch of geeky men, and women as well. So, we do have women in the community, and that’s very important. But we do think, “Okay, you know what, that does lead to some problems,” it leads to some content issues simply because people write more about what they know and what they’re interested in. They’ll tend to be dismissive of things as being unimportant if it’s not something that they personally have an interest in. I like the example, as a parent I would say our entries on early childhood development probably aren’t as good as they should be because a lot of the Wikipedia volunteers… Actually we’re getting older, the Wikipedians, so that demographic has changed a bit. But if you’ve got a bunch of 25 year old tech geek dudes who don’t have kids, they’re just not going to be interested in early childhood development. And if they tried to write about it, they probably wouldn’t do a good job, ’cause they don’t know anything about it.

01:45:53
And somebody took a look at our entries on novelists who’ve won a major literary prize, and they looked at the male novelists versus the female, and the male novelists had longer and higher quality entries. And why is that? Well, it’s not because, ’cause I know hundreds of Wikipedians, it’s not because these are a bunch of biased, sexist men who are like, “Books by women are not important.” No. Actually, there is a gender breakdown of readership. There are books, like hard science fiction’s a classic example, hard science fiction: mostly read by men. Other types of novels, more read by women. And if we don’t have women in the community, then these award-winning, clearly important novelists may have less coverage. And not because anybody consciously thinks, “We don’t like a book by Maya Angelou. Who cares? She’s a poet. That’s not interesting.”

01:46:55
No, but just because, well, people write what they know, they write what they’re interested in. So, we do think diversity in the community is really important. And that’s one area where I do think it’s really clear. But I can also say, actually that also applies in the political sphere, to say, actually, we do want kind and thoughtful Catholic priests, kind and thoughtful conservatives, kind and thoughtful libertarians, kind and thoughtful Marxists to come in. But the key is the kind and thoughtful piece, so when people sometimes come to Wikipedia outraged by some dramatic thing that’s happened on Twitter, they come to Wikipedia with a chip on their shoulder ready to do battle, and it just doesn’t work out very well.
Lex Fridman
01:47:38
And there’s tribes in general where I think there’s a responsibility on the larger group to be even kinder and more welcoming to the smaller group.
Jimmy Wales
01:47:48
Yeah, we think that’s really important. And so oftentimes, people come in and there’s a lot… When I talk about community health, one of the aspects of that that we do think about a lot, that I think about a lot is not about politics. It’s just like, how are we treating newcomers to the community? And so, I can tell you what our ideals are, what our philosophy is, but do we live up to that? So the ideal is you come to Wikipedia, we have rules. One of our fundamental rules is ignore all rules, which is partly written that way because it piques people’s attention, like, “Oh, what the hell kind of rule is that?” But basically says, “Look, don’t get nervous and depressed about a bunch of what’s the formatting of your footnote?”, so you shouldn’t come to Wikipedia, add a link, and then get banned or yelled at because it’s not the right format.

01:48:46
Instead, somebody should go, “Oh, hey. Yeah, thanks for helping, but here’s the link to how to format. If you want to keep going, you might want to learn how to format a footnote,” and to be friendly and to be open and to say, “Oh, right, oh, you’re new and you clearly don’t know everything about Wikipedia,” and sometimes in any community, that can be quite hard. So, people come in and they’ve got a great big idea, and they’re going to propose this to the Wikipedia community, and they have no idea that’s basically a perennial discussion we’ve had 7,000 times before. And so then ideally, you would say to the person, “Oh yeah, great, thanks. A lot of people have proposed that, and here’s where we got to, and here’s the nuanced conversation we’ve had about it in the past that I think you’ll find interesting,” and sometimes people are just like, “Oh God, another one who’s come in with this idea which doesn’t work, and they don’t understand why.”
Lex Fridman
01:49:39
You can lose patience, but you shouldn’t.
Jimmy Wales
01:49:40
And that’s human, but I think it just does require really thinking in a self-aware manner of, “Oh, I was once a newbie.” Actually, I just did an interview with Emily Temple-Wood; she was Wikipedian of the Year, and she’s just a great, well-known Wikipedian. And I interviewed her for my book, and she told me something I never knew, though apparently it’s not a secret she revealed just to me: when she started on Wikipedia, she was a vandal. She came in and vandalized Wikipedia. And then basically what happened was she’d vandalized a couple of articles, and then somebody popped up on her talk page and said, “Hey, why are you doing this? We’re trying to make an encyclopedia here, and this wasn’t very kind.”

01:50:29
And she felt so bad. She’s like, “Oh, right. I didn’t really think of it that way.” She just was coming in, and she was 13 years old, combative and having fun, and trolling a bit. And then she’s like, “Oh, actually, I see your point,” and became a great Wikipedian. So that’s the ideal really, is that you don’t just go throw a block, “Fuck off.” You go, “Hey, what gives?”, which is I think the way we tend to treat things in real life, if you’ve got somebody who’s doing something obnoxious in your friend group, you probably go, “Hey, really, I don’t know if you’ve noticed, but I think this person is actually quite hurt that you keep making that joke about them.” And then they usually go, “Oh, I thought that was okay,” and then they stop, or they keep it up and then everybody goes, “Well, you’re the asshole.”
Lex Fridman
01:51:21
Well, yeah, that’s just an example that gives me faith in humanity, that we’re all capable of and wanting to be kind to each other. And in general, the fact that there’s a small group of volunteers able to contribute so much to the organization, the collection, the discussion of all of human knowledge makes me so grateful to be part of this whole human project. That’s one of the reasons I love Wikipedia: it gives me faith in humanity.
Jimmy Wales
01:51:53
Yeah, no, I was once at Wikimania, our annual conference, where people come from all around the world, really active volunteers. We were in Egypt that year, in Alexandria, and at the closing dinner a friend of mine came and sat at the table. She’s been in the movement more broadly, Creative Commons; she’s not really a Wikipedian, she’d come to the conference because she’s into Creative Commons and all that. So we had dinner, and it just turned out I’d sat down at the table with most of the members of the English language Arbitration Committee, and they’re a bunch of very sweet, geeky Wikipedians.

01:52:31
And as we left the table, I said to her, “I still find this sense of amazement, we just had dinner with some of the most powerful people in English language media,” because they’re the people who are the final court of appeal in English Wikipedia. And thank goodness they’re not media moguls. They’re just a bunch of geeks who are just well-liked in the community because they’re kind and they’re thoughtful and they really think about things. I was like, “This is great. Love Wikipedia.”
Lex Fridman
01:53:01
To the degree that geeks run the best aspects of human civilization, it brings me joy. And this is true in programming, like Linux programmers, people that specialize in a thing, and they don’t really get caught up in the mess of the bickering of society. They just do their thing, and they value the craftsmanship of it, the competence of it.
Jimmy Wales
01:53:29
Yeah. If you’ve never heard of this or looked into it, you’ll enjoy it. I read something recently that I didn’t even know about: the fundamental time zones, and they change from time to time. Sometimes, a country will pass daylight savings or move it by a week, whatever. There’s a file that’s on all Unix-based computers, and basically all computers end up using this file; it’s the official time zone file. But why is it official? It’s just this one guy. It’s like this guy and a community around him.

01:54:04
And basically, something weird happened and it broke something because he was on vacation. And I’m just like, isn’t that wild that you would think… First of all, most people never even think about how do computers know about time zones? Well, they know because they just use this file which tells all the time zones and which dates they change and all of that. But there’s this one guy, and he doesn’t get paid for it. With all the billions of people on the planet, he put his hand up and goes, “Yo, I’ll take care of the time zones.”
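The file being described is the IANA tz database. A minimal sketch of how it surfaces to programmers, here via Python’s standard zoneinfo module (Python 3.9+; the zone data comes from the operating system or the third-party tzdata package):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # reads the IANA tz database (Python 3.9+)

# One instant, rendered in two zones; both UTC offsets come from the
# community-maintained tz database described above.
noon_utc = datetime(2023, 7, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
in_tokyo = noon_utc.astimezone(ZoneInfo("Asia/Tokyo"))
in_new_york = noon_utc.astimezone(ZoneInfo("America/New_York"))

print(in_tokyo.isoformat())     # 2023-07-01T21:00:00+09:00
print(in_new_york.isoformat())  # 2023-07-01T08:00:00-04:00 (DST in effect)
```

When a country changes its daylight-saving rules, updating that one shared database fixes every program that reads it.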
Lex Fridman
01:54:36
And there’s a lot of programmers listening to this right now with PTSD about time zones. On top of this one guy, there’s other libraries, the different programming languages that help manage the time zones for you. But still, within those, it’s amazing just the packages, the libraries, how few people build them out of their own love for building, for creating, for community and all of that. I almost like don’t want to interfere with the natural habitat of the geek. When you spot him in the wild, you just want to be like, “Well, careful, that thing needs to be treasured.”
Jimmy Wales
01:55:16
No, I met a guy many years ago, a lovely, really sweet guy, and he was running a bot on English Wikipedia that I thought, “Wow, that’s actually super clever.” What he had done is build a bot that was like spell checking, but rather than simple spell checking, he created a database of words that are commonly mistaken for other words: people spell a word wrong, but no spell checker catches it because the misspelling is itself another word. So, he wrote a bot that looks for these words and then checks the sentence around them for certain keywords. Buoy and boy, for example: people sometimes type B-O-Y when they mean B-U-O-Y, so if he sees the word boy, B-O-Y, in an article, he would look at the context and see, is this a nautical reference? And if it was, he didn’t autocorrect, he just flagged it up to himself to go, “Oh, check this one out.”

01:56:23
And that’s not a great example, but he had thousands of examples, and I was like, “That’s amazing. I would’ve never thought to do that.” And I’m glad that somebody did. And that’s also part of the openness of the system, and also I think being a charity, being this idea of actually, this is a gift to the world that makes someone go, “Oh, well, I’ll put my hand up. I see a little piece of things I can make better because I’m a good programmer and I can write this script to do this thing, and I’ll find it fun,” amazing.
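The idea sketches easily. In the toy version below, the confusion list and context keywords are hypothetical stand-ins, not the actual bot’s data, and like the real bot it only flags candidates for human review rather than auto-correcting:

```python
import re

# Hypothetical database: each commonly-confused word maps to the word
# probably intended, plus context keywords suggesting the mix-up.
CONFUSIONS = {
    "boy": {"intended": "buoy",
            "context": {"nautical", "harbor", "sea", "mooring", "sailors"}},
    "manger": {"intended": "manager",
               "context": {"office", "staff", "hired", "company"}},
}

def flag_suspect_words(text):
    """Return (word, suggestion) pairs worth a human look; never auto-correct."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    flags = []
    for word, info in CONFUSIONS.items():
        # Flag only when the suspect word AND some context keyword co-occur.
        if word in words and words & info["context"]:
            flags.append((word, info["intended"]))
    return flags

print(flag_suspect_words("The sailors tied the boat to a red boy in the harbor."))
# [('boy', 'buoy')]
```

A sentence like “The boy ran home from school” produces no flag, because none of the nautical context keywords appear.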
Lex Fridman
01:56:55
Well, I got to ask about this big, bold decision at the very beginning to not do advertisements on the website. And just in general, the philosophy of the business model of Wikipedia, what went behind that?
Jimmy Wales
01:57:06
Yeah, so I think most people know this, but we’re a charity, so in the US, registered as a charity. And we don’t have any ads on the site. And the vast majority of the money is from donations, but the vast majority from small donors. So, people giving $25 or whatever.
Lex Fridman
01:57:29
If you’re listening to this, go donate.
Jimmy Wales
01:57:31
Go donate.
Lex Fridman
01:57:31
Donate now.
Jimmy Wales
01:57:33
$25.
Lex Fridman
01:57:33
I’ve donated so many times
Jimmy Wales
01:57:34
And we have millions of donors every year, but it’s a small percentage of people. I would say in the early days, a big part of it was aesthetic, almost as much as anything else. It was just like, “I don’t really want ads in Wikipedia. There’s a lot of reasons why it might not be good.” And even back then, I didn’t think as much as I have since about a business model can tend to drive you in a certain place, and really thinking that through in advance is really important because you might say, “Yeah, we’re really, really keen on community control and neutrality,” but if we had an advertising based business model, probably that would begin to erode. Even if I believe in it very strongly, organizations tend to follow the money in the DNA in the long run.

01:58:25
And so things like, it’s easy to think about some of the immediate problems. So if you go to read about, I don’t know, the Nissan car company, and you saw an ad for the new Nissan at the top of the page, you might be like, “Did they pay for this?”, or, “Do the advertisers have influence over the content?”, because you wonder about that for all kinds of media.
Lex Fridman
01:58:53
And that undermines trust.
Jimmy Wales
01:58:55
Undermines trust, right. But also, things like we don’t have clickbait headlines in Wikipedia. You’ve never seen Wikipedia entries with all these kind of listicles, “The 10 funniest cat pictures, number seven will make you cry,” none of that kind of stuff, because there’s no incentive, no reason to do that. Also, there’s no reason to have an algorithm to say, “Actually, we’re going to use our algorithm to drive you to stay on the website longer. We’re going to use the algorithm to drive you to…”, It’s like, “Oh, you’re reading about Queen Victoria. There’s nothing to sell you when you’re reading about Queen Victoria. Let’s move you on to Las Vegas because actually, the ad revenue around hotels in Las Vegas is quite good,” so there’s no incentive for the organization to go, “Oh, let’s move people around to things that have better ad revenue.”

01:59:48
Instead, it’s just like, “Oh, well, what’s most interesting to the community?”, just to make those links. So, that decision just seemed obvious to me, but as I say, it was less of a business decision and more of an aesthetic one. It’s like, “I like a Wikipedia that doesn’t have ads.” In those early days, a lot of the ads… that was well before the era of really high-quality ad targeting and all that, so you got a lot of-
Lex Fridman
02:00:18
Banners.
Jimmy Wales
02:00:18
Banners, punch-the-monkey ads and all that kind of nonsense. But there was no guarantee. It was not really clear: how could we fund this? It was pretty cheap. It still is quite cheap compared to most; we don’t have 100,000 employees and all of that, but would we be able to raise money through donations? And so, I remember the first time that we really did a donation campaign was on Christmas Day in 2003, I think it was. We had three servers: one database server and two front end servers, and they were all the same size or whatever, and two of them crashed. They broke, I don’t even remember now, the hard drives. It was Christmas Day, so I scrambled on Christmas Day to go onto the database server, which fortunately survived, and have it become a front end server as well. And then, the site was really slow and it wasn’t working very well.

02:01:28
And I was like, “Okay, it’s time. We need to do a fundraiser,” and so I was hoping to raise $20,000 in a month’s time, but we raised nearly $30,000 within two or three weeks’ time. So that was the first proof point of, “Oh, we put a banner up and people will donate.” We just explained we needed the money. And we were very small back then, and people were like, “Oh yeah, I love this. I want to contribute.” Then over the years, we’ve become more sophisticated about the fundraising campaigns, and we’ve tested a lot of different messaging and so forth. I remember one year we really went heavy with, “The idea of Wikipedia is a free encyclopedia for every single person on the planet. So what about the languages of Sub-Saharan Africa?”

02:02:20
So I thought, “Okay, we’re trying to raise money. We need to talk about that because it’s really important and near and dear to my heart,” and, just instinctively knowing nothing about charity fundraising, you see it all around: charities always mention the poor people they’re helping, so let’s talk about that. It didn’t really work as well. This is very vague and very broad, but the pitch that works better than any other in general is a fairness pitch of, “You use it all the time, you should probably chip in.” And most people are like, “Yeah, you know what? My life would suck without Wikipedia. I use it constantly and whatever. I should chip in, it just seems like the right thing to do.”

02:03:02
And there’s many variants on that, obviously. And it works. And people are like, “Oh yeah, Wikipedia, I love Wikipedia, and I should.” So sometimes people say, “Why are you always begging for money on the website?”, and it’s not that often, it’s not that much, but it does happen. They’re like, “Why don’t you just get Google and Facebook and Microsoft, why don’t they pay for it?”, and I’m like, “I don’t think that’s really the right answer.”
Lex Fridman
02:03:34
Influence starts to creep in.
Jimmy Wales
02:03:35
Influence starts to creep in, and questions start to creep in. The best funding for Wikipedia is the small donors. We also have major donors. We have high net worth people who donate, but we always are very careful about that sort of thing to say, “Wow, that’s really great and really important, but we can’t let that become influence because that would just be really quite not good for Wikipedia.”
Lex Fridman
02:04:01
I would love to know how many times I’ve visited Wikipedia, how much time I’ve spent on it, because I have a general sense that it’s the most useful site I’ve ever used, competing maybe with Google search, which ultimately lands on Wikipedia.
Jimmy Wales
02:04:01
Yeah, right.
Lex Fridman
02:04:20
But if I would just be reminded of, like, “Hey, remember all those times your life was made better because of this site?”, I think I would be much more like, “Yeah, why did I waste money on site X, Y, Z when I should be giving a lot of it here?”
Jimmy Wales
02:04:33
Well, the Guardian newspaper has a similar model, which is they have ads. There’s no paywall, but they just encourage people to donate, and they do that. I’ve sometimes seen a banner saying, “Oh, this is your 134th article you’ve read this year, would you like to donate?” And I think it’s effective-
Lex Fridman
02:04:55
[inaudible 02:04:55].
Jimmy Wales
02:04:54
… they’re testing. But also, I wonder, just for some people, if they just feel guilty and then think, “Oh, I shouldn’t bother them so much.” I don’t know. It’s a good question. I don’t know the answer.
Lex Fridman
02:05:06
I guess that’s the thing I could also turn on, ’cause that would make me… I feel like legitimately, there’s some sites, and this speaks to our social media discussion: Wikipedia unquestionably makes me feel better about myself if I spend time on it. There’s some websites where, if I spend time on Twitter, sometimes I regret it. I think Elon talks about this, minimizing the number of regretted minutes. My number of regretted minutes on Wikipedia is zero. I don’t remember a time… I just discovered this: I started following a page on Instagram, depthsofwikipedia.
Jimmy Wales
02:05:46
Oh, yeah.
Lex Fridman
02:05:47
There’s crazy Wikipedia pages. There’s no Wikipedia page that [inaudible 02:05:51]-
Jimmy Wales
02:05:51
Yeah, I gave her a media contributor of the year award this year because she’s so great.
Lex Fridman
02:05:55
Yeah, she’s amazing.
Jimmy Wales
02:05:57
Depthsofwikipedia is so fun.
Lex Fridman
02:05:59
Yeah, that’s the interesting point that I don’t even know if there’s a competitor. There may be the programming, Stack Overflow type of websites, but everything else, there’s always a trade-off. It’s probably because of the ad driven model because there’s an incentive to pull you into clickbait, and Wikipedia has no clickbait. It’s all about the quality of the knowledge and the wisdom.
Jimmy Wales
02:06:22
Yeah. No, that’s right. And I also love Stack Overflow. Although, I wonder what you think of this: I only program for fun as a hobby, and I don’t have enough time to do it, but I do, and I’m not very good at it. So therefore, I end up on Stack Overflow quite a lot trying to figure out what’s gone wrong. And I have really transitioned to using ChatGPT much more for that, because I can often find the answer clearly explained, and it works better than sifting through threads, and I feel bad about that because I do love Stack Overflow and their community. I’m assuming, I haven’t read anything in the news about it, but I’m assuming they are keenly aware of this, and they’re thinking about, “How can we use this chunk of knowledge that we’ve got here and provide a new type of interface where you can query it with a question and actually get an answer that’s based on the answers that we’ve had?” I don’t know.
Lex Fridman
02:07:19
Mm-hmm. And I think Stack Overflow currently has policies against using GPT. There’s a contentious kind of tension.
Jimmy Wales
02:07:28
Of course, yeah.
Lex Fridman
02:07:29
But they’re trying to figure that out.
Jimmy Wales
02:07:30
Well, and so we are similar in that regard. Obviously, all the things we’ve talked about like ChatGPT makes stuff up and it makes up references, so our community has already put into place some policies about it. But roughly speaking, there’s always more nuance. But roughly speaking, it’s, you the human are responsible for what you put into Wikipedia. So, if you use ChatGPT, you better check it, ’cause there’s a lot of great use cases of like, “Oh, well, I’m not a native speaker of German, but I am pretty good,” I’m not talking about myself, a hypothetical me that’s pretty good, and I just want to run my edit through ChatGPT in German to go make sure my grammar’s okay. That’s actually cool.
Lex Fridman
02:08:15
Does it make you sad that people might use, increasingly use ChatGPT for something where they would previously use Wikipedia? So basically, use it to answer basic questions about the Eiffel Tower?
Jimmy Wales
02:08:32
Yeah. No-
Lex Fridman
02:08:32
And where the answer really comes at the source of it from Wikipedia, but they’re using this as an interface.
Jimmy Wales
02:08:38
Yeah. No, that’s completely fine. Part of it is our ethos has always been, “Here’s our gift to the world. Make something,” so if the knowledge is more accessible to people, even if they’re not coming through us, that’s fine. Now, obviously we do have certain business model concerns, and where we’ve had more conversation about this, this whole GPT thing is new, is things like: if you ask Alexa, “What is the Eiffel Tower?”, and she reads you the first two sentences from Wikipedia and doesn’t say it’s from Wikipedia, and they’ve recently started citing Wikipedia, then we worry, “Oh, if people don’t know they’re getting the knowledge from us, are they going to donate money? Or are they just going to think, oh, what’s Wikipedia for? I can just ask Alexa.” It’s like, well, Alexa only knows anything because she read Wikipedia. So we do think about that, but it doesn’t bother me in the sense of, like, oh, I want people to always come to Wikipedia first.

02:09:33
But we had a great demo, literally just hacked together over a weekend by our head of machine learning, where he did this little thing where you could ask any question. He was just knocking it together, so he used OpenAI’s API just to make a demo, and asked a question, “Why do ducks fly south for winter?”, which is the kind of thing you think, “Oh, I might just Google for that, or I might start looking in Wikipedia. I don’t know.” And so what he did, he asked ChatGPT, “What are some Wikipedia entries that might answer this?” Then, he grabbed those Wikipedia entries and said, “Here are some Wikipedia entries. Answer this question based only on the information in this,” and he had pretty good results, and it prevented the making stuff up. Now, it’s just something he hacked together on a weekend, but what it made me think about was, “Oh, okay, so now we’ve got this huge body of knowledge that in many cases you’re like, oh, I really want to know about Queen Victoria. I’m just going to go read the Wikipedia entry and it’s going to take me through her life and so forth.”

02:10:44
But other times, you’ve got a specific question, and maybe we could have a better search experience where you can come to Wikipedia, ask your specific question, get your specific answer that’s from Wikipedia, including links to the articles you might want to read next. And that’s just a step forward. That’s just using new type of technology to make the extraction of information from this body of text into my brain faster and easier. So, I think that’s cool.
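What that weekend hack did is now commonly called retrieval-augmented generation. A rough sketch of the flow, with stand-in functions where the demo used ChatGPT and Wikipedia (all names below are illustrative, not the actual demo code):

```python
def answer_from_wikipedia(question, suggest_titles, fetch_extract, ask_llm):
    """Grounded Q&A: retrieve Wikipedia text first, then constrain the model to it."""
    titles = suggest_titles(question)                 # which entries might answer this?
    sources = {t: fetch_extract(t) for t in titles}   # pull the text of those entries
    context = "\n\n".join(f"== {t} ==\n{body}" for t, body in sources.items())
    prompt = (
        "Answer the question using ONLY the Wikipedia extracts below. "
        "If they don't contain the answer, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt), list(sources)             # answer plus the entries used

# Stubs standing in for the LLM and the Wikipedia API, just to show the flow:
answer, used = answer_from_wikipedia(
    "Why do ducks fly south for winter?",
    suggest_titles=lambda q: ["Bird migration"],
    fetch_extract=lambda t: "Many birds migrate to escape cold and find food.",
    ask_llm=lambda p: "They migrate to escape cold and find food (per the extract).",
)
print(used)  # ['Bird migration']
```

Constraining the model to the retrieved extracts is what reduced the making-stuff-up, and returning the list of entries used is what enables the “links to the articles you might want to read next.”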
Lex Fridman
02:11:10
I would love to see ChatGPT grounding into websites like Wikipedia. And the other comparable website to me would be Wolfram Alpha for more mathematical knowledge, that kind of stuff. So, taking you to a page that is really crafted, as opposed to the moment it starts taking you to journalistic websites, news websites, it starts getting a little iffy, because you’re now in a land that has the wrong incentives.
Jimmy Wales
02:11:44
Right, yeah.
Lex Fridman
02:11:45
You’re pulled in.
Jimmy Wales
02:11:45
Yeah, and you need somebody to have filtered through that and tried to knock off the rough edges. Yeah, I think that’s exactly right. And I think that kind of grounding, I think they’re working really hard on it. I think that’s really important-
Jimmy Wales
02:12:00
… is, I think they’re working really hard on it. I think that’s really important. And that actually… So if you ask me to step back and be like very business-like about our business model and where’s it going to go for us, and are we going to lose half our donations because everybody’s just going to stop coming to Wikipedia and go to ChatGPT? Well, grounding will help a lot because frankly, most questions people have, if they provide proper links, we’re going to be at the top of that, just like we are in Google. So we’re still going to get tons of recognition and tons of traffic just from… Even if it’s just the moral properness of saying, “Here’s my source.” So I think we’re going to be all right in that.
Lex Fridman
02:12:39
Yeah, and in a close partnership, if the model is fine-tuned and constantly retrained, Wikipedia becomes one of the primary places where, if you want to change what the model knows, one of the things you should do is contribute to Wikipedia or clarify Wikipedia.
Jimmy Wales
02:12:53
Yeah, yeah. No, that’s [inaudible 02:12:55].
Lex Fridman
02:12:54
Or elaborate, expand, all that kind of stuff.
Jimmy Wales
02:12:56
Yeah.
Lex Fridman
02:12:57
You mentioned all of us have controversies. I have to ask, do you find the controversy of whether you are the sole founder or the co-founder of Wikipedia ironic, absurd, interesting, important? What are your comments?
Jimmy Wales
02:13:13
I would say unimportant. Not that interesting. I mean, one of the things that people are sometimes surprised to hear me say is I actually think Larry Sanger doesn’t get enough credit for his early work in Wikipedia, even though I think co-founder’s not the right title for that. So he had a lot of impact and a lot of great work, and I disagree about a lot of things since and all that, and that’s fine. So yeah. No, to me that’s like, it’s one of these things that the media love a falling out story, so they want to make a big deal out of it, and I’m just like, yeah, no.
Lex Fridman
02:13:51
So there’s a lot of interesting engineering contributions in the early days, like you were saying, there’s debates about how to structure it, what the heck is this thing that we’re doing? And there’s important people that contributed to that.
Jimmy Wales
02:14:02
Yeah, definitely.
Lex Fridman
02:14:03
So he also, you said you’ve had some disagreements. Larry Sanger said that nobody should trust Wikipedia, and that Wikipedia seems to assume that there’s only one legitimate, defensible version of the truth on any controversial question. That’s not how Wikipedia used to be. I presume you disagree with that analysis.
Jimmy Wales
02:14:21
Yeah. I mean, just straight up, I disagree. Go and read any Wikipedia entry on a controversial topic, and what you’ll see is a really diligent effort to explain all the relevant sides. So yeah, just disagree.
Lex Fridman
02:14:32
So on controversial questions, you think perspectives are generally represented?
Jimmy Wales
02:14:36
Yeah.
Lex Fridman
02:14:37
Because it has to do with the tension between the mainstream and the non-mainstream that we were talking about.
Jimmy Wales
02:14:43
Yeah. No, I mean for sure. To take this area of discussion seriously is to say, yeah, you know what? Actually, a big part of what Wikipedians spend their time grappling with is to say, how do we figure out whether a less popular view is pseudoscience? Is it just a less popular view that’s gaining acceptance in the mainstream? Is it fringe versus crackpot, et cetera, et cetera? And that debate is what you’ve got to do. There’s no choice about having that debate, of grappling with something. And I think we do, and I think that’s really important. And I think if anybody said to the Wikipedia community, “Gee, you should stop covering minority viewpoints on this issue,”

02:15:39
I think they would say, “I don’t even understand why you would say that. We have to grapple with minority viewpoints in science and politics and so on.” And this is one of the reasons why there is no magic simple answer to all these things. It’s really contextual. It’s case by case. It’s like you’ve got to really say, okay, what is the context here? How do you do it? And you’ve always got to be open to correction and to change and to challenge and always be sort of serious about that.
Lex Fridman
02:16:13
I think what happens, again, with social media is, when there is that grappling process in Wikipedia and a decision is made to remove a paragraph or to remove a thing or to say a thing, you’re going to notice the one direction of the oscillation of the grappling and not the correction. And you’re going to highlight that and say, how come this person… I don’t know, maybe legitimacy of elections, that’s the thing that comes up. Donald Trump maybe previously-
Jimmy Wales
02:16:42
Yeah, I can give a really good example, which is, there was this sort of dust-up about the definition of recession in Wikipedia. The accusation was often quite ridiculous and extreme, which is, under pressure from the Biden administration Wikipedia changed the definition of recession to make Biden look good, or we did it not under pressure, but because we’re a bunch of lunatic leftists and so on. And then when I see something like that in the press, I’m like, “Oh dear, what’s happened here? How did we do that?” Because I always just accept things for five seconds first, and then I go and I look and I’m like, “You know what? That’s literally completely not what happened.” What happened was, one editor thought the article needed restructuring. So the traditional kind of loose definition of recession is two quarters of negative growth, but there’s always been, within economics, within important agencies in different countries around the world, a lot of nuance around that.

02:17:43
And there’s other factors that go into it and so forth. And then it’s just an interesting, complicated topic. And so the article has always had the definition of two quarters. And the only thing that really changed was moving that from the lead, from the top paragraph, to further down. And then news stories appeared saying, “Wikipedia has changed the definition of recession.” And then we got a huge rush of trolls coming in. So the article was temporarily protected, I think only semi-protected, and people were told, “Go to the talk page to discuss.” So anyway, it was a dust-up that was… When you look at it as a Wikipedian, you’re like, “Oh, this is a really routine kind of editorial debate.” Another example, which unfortunately our friend Elon fell for, I would say, is the Twitter Files. So there was an article called the Twitter Files, which is about these files that were released once Elon took control of Twitter, and he released internal documents.

02:18:36
And what happened was somebody nominated it for deletion, but even the nomination said, “This is mainly about the Hunter Biden laptop controversy, shouldn’t this information be there instead?” So anyone can… It takes exactly one human being anywhere on the planet to propose something for deletion, and that triggers a process where people discuss it. Within a few hours, it was what we call snowball closed, i.e., this doesn’t have a snowball’s chance in hell of passing. So an admin goes, “Yeah, wrong,” and closes the debate, and that was it. That was the whole thing that happened. And so nobody proposed suppressing the information. Nobody proposed it wasn’t important. It was just an editorially boring internal question. So sometimes people read stuff like that and they’re like, “Oh, you see, look at these leftists. They’re trying to suppress the truth again.” It’s like, well, slow down a second and come and look. Literally, it’s not what happened.
Lex Fridman
02:19:36
So I think the right is more sensitive to censorship, so they will more likely highlight it. There’s more virality to highlighting something that looks like censorship in any walk of life. And this moving a paragraph from one place to another, or removing it and so on, as part of the regular grappling of Wikipedia, can make a hell of a good article or YouTube video.
Jimmy Wales
02:20:01
Oh, yeah. Yeah. No, it sounds really enticing and intriguing and surprising to most people, because they’re like, “Oh, no, I’m reading Wikipedia. It doesn’t seem like a crackpot leftist website. It seems pretty kind of dull, really, in its own geeky way.” And so that makes a good story. It’s like, oh, am I being misled? Because there’s a shadowy cabal of Jimmy Wales.
Lex Fridman
02:20:25
Generally, I read political stuff. I mentioned to you that I’m traveling to have some very difficult conversations with high-profile figures, both in the war in Ukraine and in Israel and Palestine. And I read the Wikipedia articles around that, and I also read books on the conflict and the history of the different regions. And I find the Wikipedia articles to be very balanced, and there’s many perspectives being represented. But then I ask myself, “Well, am I one of them leftist crackpots who can’t see the truth?” I mean, it’s something I ask myself all the time. Forget the leftist, just crackpot in general. Am I just being a sheep and accepting it? And I think that’s an important question to always ask, but not too much.
Jimmy Wales
02:21:12
Yeah. No, I agree.
Lex Fridman
02:21:12
A little bit, but not too much.
Jimmy Wales
02:21:15
Yeah. No, I think we always have to challenge ourselves of what do I potentially have wrong?
Lex Fridman
02:21:20
Well, you mentioned pressure from government. You’ve criticized Twitter for giving in to Turkey’s government censorship. There are also conspiracy theories, or accusations, of Wikipedia being open to pressure from governments, from government organizations, the FBI and all this kind of stuff. What is the philosophy about pressure from government and censorship?
Jimmy Wales
02:21:50
So we’re super hardcore on this. We’ve never bowed down to government pressure anywhere in the world, and we never will. And we understand that we’re hardcore. And actually there is a bit of nuance about how different companies respond to this, but our response has always been just to say no. And if they threaten to block, well, knock yourself out, you’re going to lose Wikipedia. And that’s been very successful for us as a strategy, because governments know they can’t just casually threaten to block Wikipedia, or block us for two days so that we’ll cave in immediately to get back into the market. And that’s what a lot of companies have done, and I don’t think that’s good. Now, we can go one level deeper and say, I’m actually quite sympathetic: if you have staff members in a certain country and they are at physical risk, you’ve got to put that into your equation.

02:22:43
So I understand that. If Elon said, “Actually, I’ve got a hundred staff members on the ground in such and such a country, and if we don’t comply, somebody’s going to get arrested. And it could be quite serious.” Okay, that’s a tough one. That’s actually really hard. But yeah, no. And then the FBI one, no. The criticism I saw. I kind of prepared for this because I saw people responding to your request for questions, and somebody was like, “Oh, well, don’t you think it was really bad that you da da da, da?” I actually reached out to [inaudible 02:23:18] and said, “Can you just make sure I’ve got my facts right?” And the answer is, we received zero requests of any kind from the FBI or any of the other government agencies for any changes to content in Wikipedia. And had we received those requests at the level of the Wikimedia Foundation, we would’ve said, “We can’t do anything, because Wikipedia is written by the community.”

02:23:40
And so the Wikimedia Foundation can’t change the content of Wikipedia without causing… I mean, God, that would be a massive controversy, you can’t even imagine. What we did do, and this is what I’ve done, I’ve been to China and met with the Minister of Propaganda. We’ve had discussions with governments all around the world, not because we want to do their bidding (we don’t want to do their bidding), but because we also don’t want to be blocked. And we think actually having these conversations is really important. There’s no threat of being blocked in the US. That’s just never going to happen; there is the First Amendment. But in other countries around the world, it’s like, “Okay, what are you upset about? Let’s have the conversation. Let’s understand, and let’s have a dialogue about it so that you can understand where we come from and what we’re doing and why.”

02:24:26
And then sometimes it’s like, gee, if somebody complains that something’s bad in Wikipedia, whoever they are, I don’t care who they are. It could be you, it could be the government, it could be the Pope. I don’t care who they are. It’s like, oh, okay. Well, our responsibility as Wikipedia is to go, “Oh, hold on, let’s check. Is that right or wrong? Is there something that we’ve got wrong in Wikipedia?” Not because you’re threatening to block us, but because we want Wikipedia to be correct. So we do have these dialogues with people. And a big part of what was going on, you might call it pressure on social media companies or you might call it dialogue (as we talked about earlier, you grapple with the language depending on what your view is), in our case was really just about, oh, okay, they want to have a dialogue about COVID information and misinformation.

02:25:22
We are this enormous source of information which the world depends on. We’re going to have that conversation. We’re happy to say, here’s… If they say, how do you know that Wikipedia is not going to be pushing some crazy anti-vax narrative? I mean, I think it’s somewhat inappropriate for a government to be asking pointed questions in a way that implies possible penalties. I’m not sure that ever happened, because we would just go, I don’t know, the Chinese blocked us. So it goes, right? We’re not going to cave in to any kind of government pressure. But whatever the appropriateness of what they were doing, I think there is a role for government in just saying, let’s understand the information ecosystem. Let’s think about the problem of misinformation and disinformation in society, particularly around election security, all these kinds of things. So I think it would be irresponsible of us to get a call from a government agency and say, “Yeah, why don’t you just fuck off? You’re the government.” But it would also be irresponsible to go, “Oh, dear, government agent’s not happy. Let’s fix Wikipedia so the FBI loves us.”
Lex Fridman
02:26:35
And when you say you want to have discussions with the Chinese government, or with organizations like the CDC and the WHO, it’s to thoroughly understand what the mainstream narrative is so that it can be properly represented, but not to let it drive what the articles are?
Jimmy Wales
02:26:50
Well, it’s actually important to say whatever the Wikimedia Foundation thinks has no impact on what’s in Wikipedia. So it’s more about saying to them, “We understand you’re the World Health Organization, or you’re whoever, and part of your job is to… Public health is about communications. You want to understand the world.” So it’s more about, “Well, let’s explain how Wikipedia works.”
Lex Fridman
02:27:18
So it’s more about explaining how Wikipedia works and like, “Hey, it’s the volunteers”?
Jimmy Wales
02:27:22
Yeah, exactly.
Lex Fridman
02:27:23
It’s a battle of ideas, and here’s how the sources are used.
Jimmy Wales
02:27:29
Yeah, exactly.
Lex Fridman
02:27:30
What the legitimate sources are and what is not a legitimate source.
Jimmy Wales
02:27:32
Yeah, exactly.
Lex Fridman
02:27:33
I mean, I suppose there’s some battle about what is a legitimate source. There could be statements made that the CDC… Government organizations in general have sold themselves as the place where you go for expertise. And some of that has been, to a small degree, raised in question over the response to the pandemic.
Jimmy Wales
02:27:57
Well, I think in many cases, and this goes back to my topic of trust. So there were definitely cases of public officials, public organizations, where I felt like they lost the trust of the public because they didn’t trust the public. And so the idea is, we really need people to take this seriously and take action, therefore we’re going to put out some overblown claims because it’s going to scare people into behaving correctly. You know what? That might work for a little while, but it doesn’t work in the long run, because suddenly people go from a default stance of… Like the Centers for Disease Control, very well respected scientific organization. I don’t know, they’ve got a vault in Atlanta with the last vial of smallpox, or whatever it is that people think about them. And they go, “Oh, right, these are scientists we should actually take seriously and listen to, and they’re not politicized.”

02:28:58
It’s like, okay. And if you put out statements, and I don’t know if the CDC did, but the World Health Organization, whoever, that are provably false, and also you kind of knew they were false, but you did it to scare people because you wanted them to do the right thing. It’s like, no, you know what? That’s not going to work in the long run. You’re going to lose people, and now you’ve got a bigger problem, which is a lack of trust in science, a lack of trust in authorities who are, by and large, quite boring government bureaucrat scientists who are just trying to help the world.
Lex Fridman
02:29:31
Well, I’ve been criticized, and I’ve been torn on this. I’ve been criticized for criticizing Anthony Fauci too hard. The degree to which I criticized him is because he’s a leader. And I’m just observing the effect in the loss of trust in institutions like the NIH, where I personally know there’s a lot of incredible scientists doing incredible work, and I have to blame the leaders for the effects of the distrust on the scientific work that they’re doing, because of what I perceive as basic human flaws of communication, of arrogance, of ego, of politics, all those kinds of things. Now, you could say, “You’re being too harsh.” Possible. But I think that’s the whole point of free speech, is you can criticize people who lead. Leaders, unfortunately or fortunately, are responsible for the effects on society.

02:30:28
To me, Anthony Fauci, or whoever was in the scientific position around the pandemic, had an opportunity to have an FDR moment, to get everybody together, to inspire about the power of science to rapidly develop a vaccine that saves us from this pandemic and future pandemics that can threaten the wellbeing of human civilization. This was epic and awesome and sexy. And to me, when I’m talking to people about the science, it’s anything but sexy, in terms of the virology and the biology, because it’s been politicized. It’s icky, and people just don’t want to… “Don’t talk to me about the vaccine. I understand. I understand. I got vaccinated.” It’s just, “Let’s switch topics quick.”
Jimmy Wales
02:31:11
Yeah, yeah. Well, it’s interesting, because as I say, I live in the UK, and I think all these things are a little less politicized there. And I haven’t paid close enough attention to Fauci to have a really strong view. I’m sure I would disagree with some things. I remember, at the beginning of the pandemic, unwrapping my Amazon package with these masks I bought, because I heard there’s a pandemic, and I just was like, “I want some N95 masks, please.” And they were saying, “Don’t buy masks.” And the motivation was that they didn’t want there to be shortages in hospitals. Fine. But there were also statements that masks aren’t effective and won’t help you. And then the complete about-face to, you’re ridiculous if you’re not wearing a… It’s just like, no, that about-face just lost people from day one.
Lex Fridman
02:32:06
The distrust in the intelligence of the public to deal with nuance, to deal with the uncertainty.
Jimmy Wales
02:32:11
Yeah. This is exactly where I think the Wikipedia neutral point of view is, and should be, ideally. And obviously, not every article and everything we do… You know me now and you know how I am about these things. But ideally, it’s to say, look, we’re happy to show you all the perspectives. This is Planned Parenthood’s view, and this is the Catholic Church’s view, and we’re going to explain that, and we’re going to try to be thoughtful and put in the best arguments from all sides, because I trust you. You read that and you’re going to be more educated, and you’re going to begin to make a decision. I mean, I can just talk about the UK: the government, da, da, da. When we found out in the UK that very high-level government officials were not following the rules they had put on everyone else… I had just become a UK citizen a little while before the pandemic, and it’s kind of emotional. You get a passport in a new country and you feel quite good.

02:33:09
I did my oath to the Queen, and then they dragged the poor old lady out to tell us all to be good. I was like, “We’re British, and we’re going to do the right things, and it’s going to be tough, but…” So you have that kind of Dunkirk-spirit moment, and you’re following the rules to a T, and then suddenly it’s like, well, they’re not following the rules. And so suddenly I shifted personally from, “I’m going to follow the rules even if I don’t completely agree with them, because I think we’ve got to all chip in together,” to, “You know what? I’m going to make wise and thoughtful decisions for myself and my family.” And that generally is going to mean following the rules. But at certain moments in time, you’re not allowed to be in an outside space unless you’re exercising. I’m like, I think I can sit in a park and read a book. It’s going to be fine. That’s an irrational rule, which I would’ve been following anyway just personally, of like, I’m just going to do the right thing.
Lex Fridman
02:34:06
And the loss of trust, I think, at scale was probably harmful to science. And to me, the scientific method and the scientific community is one of the biggest hopes, at least to me, for the survival and the thriving of human civilization.
Jimmy Wales
02:34:22
Absolutely. And I think you see some of the ramifications of this. There have always been anti-science, anti-vax people. That’s always been a thing, but I feel like it’s bigger now, simply because of that lowering of trust. So maybe it’s like you say, a lot of people are like, “Yeah, I got vaccinated, but I really don’t want to talk about this because it’s so toxic.” And that’s unfortunate, because I think people should say, “What an amazing thing.” There’s also a whole range of discourse around, if this were a disease that was primarily killing babies, I think people’s emotions about it would’ve been very different, right or wrong. And the fact is, when you really looked at the death rate from getting COVID, wow, it’s really dramatically different. If you’re late in life, this was really dangerous. And if you’re 23 years old, yeah, well, it’s not great, and long COVID is a thing and all of that. And I think some of the public communications, again, were failing to properly contextualize it. Not all of it. It’s a complicated matter, but yeah.
Lex Fridman
02:35:45
Let me read you a Reddit comment that received two likes.
Jimmy Wales
02:35:48
Oh, two whole people liked it.
Lex Fridman
02:35:52
Yeah, two people liked it. And I don’t know, maybe you can comment on whether there’s truth to it, but I just found it interesting because I’ve been doing a lot of research on World War II recently. So this is about Hitler.
Jimmy Wales
02:36:06
Oh, okay.
Lex Fridman
02:36:06
It’s a long statement. “I was there when a big push was made to fight bias at Wikipedia. Our target became getting the Hitler article to be Wiki’s featured article. The idea was that the voting body only wanted articles that were good PR, and especially articles about socially liberal topics. So the Hitler article had to be two to three times better and more academically researched to beat the competition. This bias seems to hold today. For example, the current list of political featured articles at a glance seems to have only two books, one on anarchism and one on Karl Marx. Surely we’re not going to say there have only ever been two articles about political non-biography books worth being featured, especially compared to 200-plus video games. As if the only topics with good books are socialism and anarchy.” Do you have any interesting comments on this kind of-
Jimmy Wales
02:36:06
Oh, yeah.
Lex Fridman
02:37:00
[inaudible 02:37:00] featured, how the featured is selected, maybe Hitler, because he is a special figure [inaudible 02:37:09] kind of stuff.
Jimmy Wales
02:37:09
I love that. No, I love the comparison to how many video games, and that definitely speaks to my earlier point: if you’ve got a lot of young geeky men who really like video games, that doesn’t necessarily get you to the right place in every respect, certainly. Yeah. So here’s a funny story. I woke up one morning to a bunch of journalists in Germany trying to get in touch with me, because German-language Wikipedia chose to have as the featured article of the day: Swastika. And people were going crazy about it, and some people were saying, “It’s illegal. Has German Wikipedia been taken over by Nazi sympathizers?” and so on. And it turned out it’s not illegal to discuss the swastika. Using the swastika in a political campaign, and using it in certain ways, is illegal in Germany in a way that it wouldn’t be in the US because of the First Amendment. But in this case, part of the point is the swastika symbol is from other cultures as well.

02:38:17
I just thought it was interesting. I did joke to the community, I’m like, “Please don’t put the swastika on the front page without warning me, because I’m going to get [inaudible 02:38:25].” It wouldn’t be me, it’s the foundation; I’m not that much on the front lines. So I would say that to put Hitler on the front page of Wikipedia, it is a special topic, and you would want to say, “Yeah, let’s be really careful that it’s really, really good before we do that,” because if we put it on the front page and it’s not good enough, that could be a problem. But there’s no inherent reason. Clearly, World War II is a very popular topic in Wikipedia. It’s like, turn on the History Channel. It’s a fascinating period of history that people are very interested in. And then on the other piece, like anarchism and Karl Marx.
Lex Fridman
02:39:05
Karl Marx. Yeah.
Jimmy Wales
02:39:06
Oh, yeah. I mean, that’s interesting. I’m surprised to hear that not more political books or topics have made it to the front page.
Lex Fridman
02:39:15
Now we’re taking this Reddit comment-
Jimmy Wales
02:39:16
I mean, as if-
Lex Fridman
02:39:17
At face value.
Jimmy Wales
02:39:18
… it’s completely… But I’m trusting, so I think that probably is right. They probably did have the list up. No, I think that piece… The piece about how many of those featured articles have been video games, if it’s disproportionate, I think the community should go, “Actually, what’s going on? That doesn’t seem quite right.” I mean, you can imagine that because you’re looking for an article to be on the front page of Wikipedia, you want to have a bit of diversity in it. You want it to not always be something that’s really popular that week. So, I don’t know, the last couple of weeks, the big finale of Succession might lead you to think, oh, let’s put Succession on the front page, that’s going to be popular. In other cases, you kind of want to pick something super obscure and quirky, because people also find that interesting and fun. Yeah, I don’t know. But you don’t want it to be video games most of the time. That sounds quite bad.
Lex Fridman
02:40:17
Well, let me ask you just as somebody who’s seen the whole thing, the development of the millions of articles. Big impossible question, what’s your favorite article?
Jimmy Wales
02:40:33
My favorite article? Well, I’ve got an amusing answer, which is possibly also true. There’s an article in Wikipedia called Inherently Funny Words, and one of the reasons I love it is when it was created early in the history of Wikipedia, it kind of became like a dumping ground. People would just come by and write in any word that they thought sounded funny. And then it was nominated for deletion because somebody’s like, “This is just a dumping ground. People are putting all kinds of nonsense in.” And in that deletion debate, somebody came forward and said essentially, “Wait a second, hold on. This is actually a legitimate concept in the theory of humor and comedy. And a lot of famous comedians and humorists have written about it.” And it’s actually a legitimate topic. So then they went through and they meticulously referenced every word that was in there and threw out a bunch that weren’t.

02:41:29
And so it became this really interesting article. And now my biggest disappointment, and it was the right decision to make because there was no source, but there was a picture of a cow with a rope around its head, tying some horns onto the cow. So it was kind of a funny-looking picture. It looked like a bull with horns, but it’s just a normal milk cow. And below it, the caption said, “According to some, cow is an inherently funny word,” which is just hilarious to me, partly because the “According to some” sounds a lot like Wikipedia. But there was no source, so it went away, and I feel very sad about that, but I’ve always liked that. And actually, the reason Depths of Wikipedia amuses me so greatly is because it does highlight really interesting, obscure stuff, and you’re like, “Wow, I can’t believe somebody wrote about that in Wikipedia. It’s quite amusing.” And sometimes there’s a bit of wry humor in Wikipedia. There’s always a struggle. You’re not trying to be funny, but occasionally a little inside humor can be quite healthy.
Lex Fridman
02:42:40
Apparently words with the letter K are funny. There’s a lot of really well researched stuff on this page. It’s actually exciting. And I should mention, for Depths of Wikipedia, it’s run by Annie Rauwerda.
Jimmy Wales
02:42:56
That’s right, Annie.
Lex Fridman
02:42:57
And let me just read off some of the pages. Octopolis and Octlantis-
Jimmy Wales
02:43:05
Oh yeah, that was…
Lex Fridman
02:43:05
… are two separate non-human underwater settlements built by gloomy octopuses in Jervis Bay, East Australia. The first settlement, named Octopolis by a biologist, was founded in 2009. The individual structures in Octopolis consist of burrows around a piece of human detritus, believed to be scrap metal, and it goes on in this way.
Jimmy Wales
02:43:29
That’s great.
Lex Fridman
02:43:30
Satiric misspelling. Least-concern species: humans were formally assessed as a species of least concern in 2008. I think the Hitchhiker’s Guide to the Galaxy would slightly disagree. And the last one, let me just say: the friendship paradox is the phenomenon, first observed by the sociologist Scott Feld in 1991, that on average an individual’s friends have more friends than that individual.
Jimmy Wales
02:43:58
Oh, that’s really interesting.
Lex Fridman
02:43:58
That’s very lonely.
Jimmy Wales
02:44:00
That’s the kind of thing that makes you want to… It sounds implausible at first, because shouldn’t everybody have, on average, about the same number of friends as all their friends? So you really want to dig into the math of that and really think, oh, why would that be true?
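The math Jimmy is alluding to can be checked directly on a toy example. A minimal sketch in Python, with a made-up four-person friendship graph (the names and numbers are purely illustrative, not anything from the conversation):

```python
# Friendship paradox on a tiny, made-up graph: on average, your friends
# have more friends than you do, because popular people appear in many
# friend lists and so get counted more often when averaging over friends.
from statistics import mean

# Undirected friendship graph as an adjacency list (hypothetical people).
friends = {
    "ann": ["bob", "cat", "dan"],
    "bob": ["ann"],
    "cat": ["ann", "dan"],
    "dan": ["ann", "cat"],
}

# Average number of friends per person.
avg_friends = mean(len(fs) for fs in friends.values())

# For each person, the average friend count among their friends,
# then averaged over everyone.
avg_friends_of_friends = mean(
    mean(len(friends[f]) for f in fs) for fs in friends.values()
)

print(avg_friends)             # 2: people average two friends each
print(avg_friends_of_friends)  # ~2.42: their friends average more
```

The inequality is strict whenever friend counts vary: averaging over friends weights each person by their own popularity, so well-connected people like "ann" are counted once per friendship, not once per person.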
Lex Fridman
02:44:13
And it’s one way to feel more lonely in a mathematically rigorous way. Somebody else on Reddit asks, “I would love to hear some war stories from behind the scenes.” Is there something that we haven’t mentioned that was particularly difficult in this entire journey you’re on with Wikipedia?
Jimmy Wales
02:44:32
I mean, yeah, it’s hard to say. So part of what I always say about myself is that I’m a pathological optimist, so I always think everything is fine. And so things that other people might find a struggle, I’m just like, “Oh, well, this is the thing we’re doing today.” So that’s kind of about me. I’m aware of this about myself, so I do like to have a few pessimistic people around me to keep me a bit on balance.
Jimmy Wales
02:45:00
I would say some of the hard things. I mean, there were hard moments when two out of three servers crashed on Christmas Day and then we needed to do a fundraiser and no idea what was going to happen. I would say as well, in that early period of time, the growth of the website and the traffic to the website was phenomenal and great. The growth of the community and in fact the healthy growth of the community was fine.

02:45:29
And then the Wikimedia Foundation, the nonprofit I set up to own and operate Wikipedia, was a small organization, and it had a lot of growing pains. That was the piece that’s just like many companies or organizations in fast growth: you’ve hired the wrong people, or there’s this conflict that’s arisen and nobody has the experience to deal with it, and all that. So, no specific stories to tell, but I would say growing the organization was harder than growing the community and growing the website, which is interesting.
Lex Fridman
02:46:02
Well, yeah. It’s kind of miraculous and inspiring that a community can emerge and be stable, and that has so much kind of productive, positive output. Kind of makes you think. It’s one of those things you don’t want to analyze too much because you don’t want to mess with a beautiful thing, but it gives me faith in communities. I think that they can spring up in other domains as well.
Jimmy Wales
02:46:29
Yeah, I think that’s exactly right. At Fandom, my for-profit wiki company where it’s all these communities about pop culture mainly, sort of entertainment, gaming and so on, there’s a lot of small communities. So, I went last year to our Community Connect conference and just met some of these people, and here’s one of the leaders of the Star Wars wiki, which is called Wookieepedia, which I think is great. And he’s telling me about his community and all that. And I’m like, “Oh, right. Yeah, I love this.”

02:47:03
So, it’s not the same purpose as Wikipedia, of a neutral, high-quality encyclopedia, but a lot of the same values are there, of like, “Oh, people should be nice to each other.” It’s like, when people get upset, just remember, we’re working on a Star Wars wiki together, there’s no reason to get too outraged. And they’re just kind people, just geeky people with a hobby.
Lex Fridman
02:47:27
Where do you see Wikipedia in 10 years, 100 years, and 1,000 years?
Jimmy Wales
02:47:35
Right. So, 10 years, I would say pretty much the same. We’re not going to become TikTok with entertainment deals and scroll-by video humor and blah-blah-blah; we’re an encyclopedia. I think in 10 years, we probably will have a lot more AI support tools like I’ve talked about, and probably your search experience would be that you can ask a question and get the answer, drawn from our body of work.
Lex Fridman
02:48:09
So, search and discovery, a little bit improved, interface, some of that.
Jimmy Wales
02:48:12
Yeah, all that. I always say one of the things that most people won’t notice, because already they don’t notice it, is the growth of Wikipedia in the languages of the developing world. So, you probably don’t speak Swahili, so you’re probably not checking out that Swahili Wikipedia is doing very well, and it is doing very well. And I think that kind of growth is actually super important. It’s super interesting, but most people won’t notice that.
Lex Fridman
02:48:41
If we can just linger on that: there’s so much incredible translation work being done with AI, with language models. Do you think that can accelerate Wikipedia?
Jimmy Wales
02:48:55
Yeah, I do.
Lex Fridman
02:48:55
So, you start with the basic draft of the translation of articles and then build on top of that.
Jimmy Wales
02:49:00
What I used to say is that machine translation for many years wasn’t much use to the community, because it just wasn’t good enough. As it’s gotten better, it’s tended to be a lot better in what we might call economically important languages, because of the corpus they train on and all of that.

02:49:20
So, to translate from English to Spanish, if you’ve tried Google Translate recently (Spanish to English is what I would do), it’s pretty good. It’s actually not bad. It used to be half a joke, and then for a while it was kind of like, “Well, you can get the gist of something.” And now, actually, it’s pretty good. However, we’ve got a huge Spanish community who write in native Spanish, so while they’re able to use it and find it useful, they’re writing natively anyway.

02:49:44
But if you tried to do English to Zulu, where there’s not that much investment, there are loads of reasons to invest in English-Spanish, because they’re both huge, economically important languages. Zulu, not so much. So, for those smaller languages, it was just still terrible. My understanding is it’s improved dramatically, and also because the new methods of training don’t necessarily involve parallel corpora to try to match things up, but rather reading and understanding with tokens and large language models, and then you get a much richer …

02:50:22
Anyway, apparently it’s quite improved, so I think that now, it is quite possible that these smaller language communities are going to say, “Oh, well, finally, I can put something in in English and get out Zulu that I feel comfortable sharing with my community, because it’s actually good enough, or I can edit it a bit here and there.” So, I think that’s huge. So, I do think that’s going to happen a lot, and that’s going to accelerate, again, what will remain to most people an invisible trend, but that’s the growth in all these other languages. So, then move on to 100 years.
Lex Fridman
02:50:52
It was starting to get scary.
Jimmy Wales
02:50:54
Well, the only thing I’d say about 100 years is we’ve built the Wikimedia Foundation, and we run it in a quite cautious, financially conservative, and careful way. So, every year, we build our reserves. Every year, we put aside a little bit more money. We also have the endowment fund, which just passed $100 million; that’s a completely separate fund with a separate board. So, it’s not just a big fat bank account for some future profligate CEO to blow through. The foundation will have to get the approval of that second, separate board to be able to access that money, and that board can make other grants through the community and things like that.

02:51:38
So, the point of all that is I hope and believe that we are building in a financially stable way, so that we can weather various storms along the way, so that hopefully we’re not taking the kind of risks that could sink us. And by the way, we’re not taking too few risks either. That’s always hard. I think the Wikimedia Foundation and Wikipedia will exist in 100 years; if anybody exists in 100 years, we’ll be there.
Lex Fridman
02:52:06
Do you think the internet will just look predictably different, just the web?
Jimmy Wales
02:52:11
I do. I think right now, this enormous step forward we’ve seen, which has become public in the last year, of the large language models, really is something else. It’s really interesting. You and I have both talked today about the flaws and the limitations, but still, as someone who’s been around technology for a long time, it’s sort of that feeling of the first time I saw a web browser, the first time I saw the iPhone, the first time the internet was really usable on a phone. And it’s like, “Wow, that’s a step-change difference.” There’s a few other …
Lex Fridman
02:52:48
Maybe a Google Search.
Jimmy Wales
02:52:49
Google Search was actually one.
Lex Fridman
02:52:51
I remember the first Search.
Jimmy Wales
02:52:51
Because I remember AltaVista was kind of cool for a while, then it just got more and more useless, because the algorithm wasn’t good. And it’s like, “Oh, Google Search, now I like the internet, it works again.” And so, large language models, it feels like that to me. Like, “Oh, wow, this is something new and really pretty remarkable.” And it’s going to have some downsides. The negative use cases …

02:53:14
People in the area who are experts are giving a lot of warnings. I’m not that worried, but I’m a pathological optimist. But I do see some really low-hanging-fruit bad things that can happen. My example is, how about some highly customized spam, where the email that you receive isn’t just misspelled words trying to get through filters, but actually a targeted email to you, one that knows something about you from reading your LinkedIn profile and is plausible enough to get through the filters. And it’s like suddenly, “Oh, that’s a new problem. That’s going to be interesting.”
Lex Fridman
02:53:55
Just on the Wikipedia editing side, does it make the job of the volunteer, of the editor, more difficult in a world where a larger and larger percentage of the internet is written by an LLM?
Jimmy Wales
02:54:08
One of my predictions, and we’ll see, ask me again in five years how this panned out, is that in a way, this will strengthen the value and importance of some traditional brands. So, if I see a news story and it’s from the Wall Street Journal, from the New York Times, from Fox News, I know what I’m getting, and I trust it to whatever extent I might have trust or distrust in any of those.

02:54:43
And if I see a brand-new website that looks plausible, but I’ve never heard of it, and it could be machine-generated content that may be full of errors, I think I’ll be more cautious, and I think that’s interesting. We can also talk about this around photographic evidence. So, obviously, there will be scandals where major media organizations get fooled by a fake photo.

02:55:04
However, if I see a photo like one of the recent ones, the Pope wearing an expensive puffer jacket, I’m going to go, “Yeah, that’s amazing that a fake like that could be generated.” But my immediate thought is not, “Oh, so the Pope is dipping into the money, eh?” Partly because this particular Pope doesn’t seem like he’d be the type.
Lex Fridman
02:55:25
My favorite is extensive pictures of Joe Biden and Donald Trump hanging out and having fun together.
Jimmy Wales
02:55:31
Yeah. Brilliant. So, I think people will care about the provenance of a photo. And if you show me a photo and you say, “Yeah, this photo is from Fox News,” even though I don’t necessarily think that’s the highest standard, I’m like, “Wow, it’s a news organization, and they’re going to have journalism, and they’re going to make sure the photo is what it purports to be.”

02:55:55
That’s very different from a photo randomly circulating on Twitter. Whereas I would say, 15 years ago, with a photo randomly circulating on Twitter, in most cases the worst you could do, and this did happen, is misrepresent the battlefield. So, like, “Oh, here’s a bunch of injured children. Look what Israel has done.” But actually, it wasn’t Israel, it was another conflict 10 years earlier. That has happened; that has always been around. But now, we can have much more specifically constructed, plausible-looking photos, so that if I just see them circulating on Twitter, I’m going to go, “I just don’t know. Not sure. I could make that in five minutes.”
Lex Fridman
02:56:32
Well, I also hope, and it’s kind of like what you’re writing about in your book, that we could also have citizen journalists who have a stable, verifiable trust that builds up. So, it doesn’t have to be the New York Times, this organization; you could be an organization of one, as long as it’s stable, carries through time, and the trust builds up.
Jimmy Wales
02:56:52
No, I agree. But the one thing I’ve said in the past, and this depends on who that person is and what they’re doing, is that I think my credibility, my general credibility in the world, should be the equal of a New York Times reporter’s. So, if something happens, and I witness it, and I write about it, people are going to go, “Well, Jimmy Wales said it. That’s just like if a New York Times reporter said it. I’m going to tend to think he didn’t just make it up.”

02:57:18
The truth is nothing interesting ever happens around me. I don’t go to war zones. I don’t go to big press conferences. I don’t interview Putin and Zelenskyy. So, to an extent, yes. Whereas I do think for other people, those traditional models of credibility are really, really important. And then there is this sort of citizen journalism. I don’t know if you think of what you do as journalism. I kind of think it is, but you do interviews, you do long-form interviews.

02:57:49
If you come and you say, “Here’s my tape,” but you wouldn’t hand out a tape. I just gestured to you as if I’m handing you a cassette tape. But if you put it into your podcast, “Here’s my interview with Zelenskyy,” people aren’t going to go, “Yeah, how do we know? That could be a deepfake. You could have faked that.” Because people are like, “Well, no, you’re a well-known podcaster, and you do interview interesting people. You wouldn’t fake that.” So, your brand becomes really important.

02:58:19
Whereas, and I’ve seen this already, I’ve seen sort of a video with subtitles in English, and apparently the Ukrainian said the same, and it was Zelenskyy saying something really outrageous. And I’m like, “Yeah, I don’t believe that. I don’t think he said that in a meeting with whoever. I think that’s Russian propaganda, or probably just trolls.”
Lex Fridman
02:58:42
Yeah. And then building platforms and mechanisms of how that trust can be verified. If something appears on a Wikipedia page, that means something. If something appears on my Twitter account, that means something. That means I, this particular human, have signed off on it.
Jimmy Wales
02:58:58
Yeah, exactly.
Lex Fridman
02:58:58
And then the trust you have in this particular human transfers to the piece of content. Hopefully, there’s millions of people with different metrics of trust. And then you could see that there’s a certain kind of bias in the set of conversations you’re having. So, maybe okay, I trust this person, I have this kind of bias and I’ll go to this other person with this other kind of bias and I can integrate them in this kind of way. Just like you said with Fox News and whatever [inaudible 02:59:24].
Jimmy Wales
02:59:23
Yeah. Wall Street Journal, New York Times, they’ve all got where they sit. Yeah.
Lex Fridman
02:59:29
So, you have built, I would say, one of, if not the, most impactful websites in the history of human civilization. So, let me ask you to give advice to young people on how to have impact in this world: high schoolers, college students wanting to have a big positive impact on the world.
Jimmy Wales
02:59:50
Yeah, great. If you want to be successful, do something you’re really passionate about rather than some kind of cold calculation of what can make you the most money. Because if you go and try to do something and you’re like, “I’m not that interested, but I’m going to make a lot of money doing it,” you’re probably not going to be that good at it. And so, that is a big piece of it.

03:00:12
For startups, I give this advice, and this goes for a career, a startup, any kind of young person just starting out: be persistent. There will be moments when it’s not working out, and you can’t just give up too easily. You’ve got to persist through some hard times. Maybe two servers crash on a Sunday, and you’ve got to scramble to figure it out, but persist through that, and then also be prepared to pivot. That’s a newer word, new for me. But when I pivoted from Nupedia to Wikipedia, it was like, “This isn’t working. I’ve got to completely change.” So, be willing to completely change direction when something is not working.

03:00:54
Now, the problem with these two wonderful pieces of advice is: which situation am I in today? Is this a moment when I need to just power through and persist, because I’m going to find a way to make this work? Or is this a moment where I need to go, “Actually, this is totally not working, and I need to change direction”? But also, I think for me, that always gives me a framework of, “Okay, here’s the problem. Do we need to change direction, or do we need to power through it?” And just knowing those are the choices. Not always the only choices, but those choices.

03:01:27
I think it can be helpful to say, “Okay, am I chickening out because I’m hitting a little bump, and I’m feeling emotional, and I’m just going to give up too soon?” Ask yourself that question. And also, “Am I being pigheaded and trying to do something that actually doesn’t make sense?” Ask yourself that question too. Even though they’re contradictory questions, sometimes it will be one, sometimes it will be the other, and you’ve got to really think it through.
Lex Fridman
03:01:53
I think persisting with the business model behind Wikipedia is such an inspiring story, because we live in a capitalist world. We live in a scary world, I think, for an internet business. And so, to do things differently than a lot of websites are doing them … Wikipedia has lived through this excessive explosion of many websites that are basically ad-driven. Google is ad-driven. Facebook, Twitter, all of these websites are ad-driven. And to see them succeed, become these incredibly rich, powerful companies … if you could just have that money, you would think as somebody running Wikipedia, “I could do so much positive stuff.” And so, to persist through that is … I think, from my Monday-morning-quarterback perspective now, it was the right decision, but boy, is that a tough decision.
Jimmy Wales
03:02:56
It seemed easy at the time.
Lex Fridman
03:02:58
And then you just kind of stay with it. Stick with it.
Jimmy Wales
03:03:00
Yeah, just stay with it. It’s working.
Lex Fridman
03:03:01
So, in that case, you chose to persist.
Jimmy Wales
03:03:06
Yeah. I always like to give the example of MySpace, because I just think it’s an amusing story. MySpace was poised, I would say, to be Facebook. It was huge, it was viral, it was lots of things. It kind of foreshadowed maybe even a bit of TikTok, because it was a lot of entertainment content, casual. And then Rupert Murdoch bought it, and it collapsed within a few years. And part of that, I think, was because they were really, really heavy on ads and less heavy on the customer experience.

03:03:40
So, I remember, to accept a friend request was like three clicks, where you saw three ads. And on Facebook, you accepted the friend request and you didn’t even leave the page; it was just accepted. So, I used to give this example of, “Yeah, well, Rupert Murdoch really screwed that one up.” And in a sense, maybe he did, but somebody said, “You know what, actually, he bought it for …” And I don’t remember the numbers, he bought it for something like 800 million, and it was very profitable through its decline. He actually made his money back and more. So, from a financial point of view, it was only a bad investment in the sense that you could have been Facebook. But on more mundane metrics, it’s like, “Actually, it worked out for him.”
Lex Fridman
03:04:18
It all matters how you define success.
Jimmy Wales
03:04:20
It does. That is also advice to young people. One of the things I would say: we have our mental models of success as an entrepreneur, for example, and the examples in your mind are Bill Gates, Mark Zuckerberg. So, people who at a very young age had one really great idea that just went straight to the moon, and they became one of the richest people in the world. That is really unusual, like really, really rare.

03:04:52
And for most entrepreneurs, that is not the life path you’re going to take. You’re going to fail, you’re going to reboot, you’re going to learn from what you failed at. You’re going to try something different. And that is really important, because if your standard of success is, “Well, I feel sad because I’m not as rich as Elon Musk,” it’s like, well, so should almost everyone; everyone except Elon Musk is not as rich as Elon Musk.

03:05:17
Realistically, you can set a standard of success even in a really narrow sense, which I don’t recommend, of thinking about your financial success. If you measure your financial success by comparing yourself to billionaires, that’s heavy. That’s probably not good. I don’t recommend it.

03:05:40
Personally, for me, when journalists say, “Oh, how does it feel to not be a billionaire?” I usually say, “I don’t know, how does it feel to you?” Because they’re not either. But also, I live in London. The number of bankers that no one has ever heard of who live in London and make far more money than I ever will is quite large, and I wouldn’t trade my life for theirs at all, because mine is so interesting.

03:06:07
“Oh, right, Jimmy, we need you to go and meet the Chinese propaganda minister.” “Oh, okay. That’s super interesting.” Or, “Yeah, Jimmy, here’s the situation. You can go to this country, and while you’re there, the President has asked to see you.” It’s like, “God, that’s super interesting.” “Jimmy, you’re going to this place, and there’s a local Wikipedian who said, ‘Do you want to stay with me and my family?'” And I’m like, “Yeah, that’s really cool. I would like to do that. That’s really interesting.” I don’t do that all the time, but I’ve done it, and it’s great. So, for me, that’s arranging your life so that you have interesting experiences. It’s just great.
Lex Fridman
03:06:50
This is more to the question of what Wikipedia looks like in 1,000 years. What do you think is the meaning of this whole thing? Why are we here, human civilization? What’s the meaning of life?
Jimmy Wales
03:07:00
Yeah. I don’t think there is an external answer to that question.
Lex Fridman
03:07:05
And I should mention that there’s a very good Wikipedia page on the different philosophies of the meaning of life.
Jimmy Wales
03:07:11
Oh, interesting. I have to read that and see what I think. Hopefully, it’s actually neutral and gives a wide range …
Lex Fridman
03:07:16
Oh, it’s a really good reference to a lot of different philosophies about meaning. Twentieth-century philosophy in general, from Nietzsche to the existentialists, to Simone de Beauvoir, they all have an idea of meaning. They really struggle with it systematically, rigorously, and that’s what the page … And obviously, a shout-out to the Hitchhiker’s Guide and all that kind of stuff.
Jimmy Wales
03:07:37
Yeah. I think there’s no external answer to that. I think it’s internal. I think we decide what meaning we will have in our lives and what we’re going to do with ourselves. If we’re talking about 1,000 years, millions of years: Yuri Milner wrote a book. He’s a big internet investor guy. He wrote a book advocating quite strongly for humans exploring the universe and getting off the planet. And he funds projects using lasers to send little cameras, and interesting stuff. And he talks a lot in the book about meaning. His view is that the purpose of the human species is broadly to survive and get off the planet.

03:08:31
Well, I don’t agree with everything he has to say, because I think that’s not a meaning that can motivate most people in their own lives. It’s like, “Okay, great.” The distances of space are absolutely enormous, so I don’t know. Shall we build generation ships to start flying places? I can’t do that. Even if I were Elon Musk and could devote all my wealth to building one, I’d be dead on the ship along the way. So, is that really a meaning?

03:08:57
But I think it’s really interesting to think about. And reading his little book, it’s quite a short little book, it did make me think, “Wow, this is big. This is not what you think about in your day-to-day life. Where is the human species going to be in 10 million years?” And it does make you sort of turn back to Earth and say, “Gee, let’s not destroy the planet. We’re stuck here for at least a while, and therefore we should really think about sustainability.” I mean, one-million-year sustainability.

03:09:37
And we don’t have all the answers. We have nothing close to the answers. I’m actually excited about AI in this regard, while also bracketing that, yeah, I understand there are also risks, and people are terrified of AI. But I actually think it is quite interesting, this moment in time, that we may have in the next 50 years the chance to really, really solve some long-term human problems, for example, in health. The progress that’s being made in cancer treatment, because we are able to model molecules and genetics at scale, and things like this, it’s huge. It’s really exciting. So, if we can hang on for a little while, certain problems that seem completely intractable today, like climate change, may end up being actually not that hard.
Lex Fridman
03:10:30
And we just might be able to alleviate the full diversity of human suffering.
Jimmy Wales
03:10:35
For sure. Yeah.
Lex Fridman
03:10:37
In so doing, help increase the chance that we can propagate the flame of human consciousness out towards the stars. And I think another important one, if we fail to do that, for me, is propagating and maintaining the full diversity, and richness, and complexity, and expansiveness of human knowledge. So, if we destroy ourselves, it would make me feel a little bit okay if the human knowledge persists.
Jimmy Wales
03:11:09
That just triggered me to say something really interesting, which is, when we talked earlier about using machines to translate, we mostly talked about small languages and translating into English, but I always like to tell this story of something inconsequential, really.

03:11:28
I was in Bergen, Norway, where every year they’ve got this annual festival called [foreign language 03:11:33], which is young groups drumming, and they have a drumming competition. It’s the 17 sectors of the city, and they’ve been doing it for a couple hundred years or whatever. They wrote about it in the three languages of Norway, and from there it was translated into English, into German, et cetera, et cetera.

03:11:53
And so, what I love about that story is what it reminds me of: this machine translation goes both ways. And when you talk about the richness and broadness of human culture, we’re already seeing some really great pieces of this. So, like Korean soap operas, really popular, not with me, but with people.

03:12:17
Imagine taking a very famous, very popular, very well-known Korean drama. We’re just about there technologically, I literally mean now, where we can use a machine to redub it in English in an automated way, including digitally editing the faces so it doesn’t look dubbed. And so, suddenly you say, “Oh, wow, here’s a piece of …” It’s the Korean equivalent of, maybe it’s Friends as a comedy, or maybe it’s Succession, just to be very contemporary. It’s something that really impacted a lot of people, and they really loved it, and we have literally no idea what it’s about. And suddenly, it’s like, “Wow.” Music, street music from anywhere in the world, can suddenly become accessible to us all in new ways. It’s so cool.
Lex Fridman
03:13:09
It’s really exciting to get access to the richness of culture in China, in the many different subcultures of Africa, South America.
Jimmy Wales
03:13:19
One of my unsuccessful arguments with the Chinese government is: by blocking Wikipedia, you aren’t just stopping people in China from reading Chinese Wikipedia and other language versions of Wikipedia, you’re also preventing the Chinese people from telling their story. So, is there a small festival in a small town in China like [foreign language 03:13:41]? I don’t know. But the people who live in that village, that small town of 50,000, they can’t put that in Wikipedia and get it translated into other places. They can’t share their culture and their knowledge.

03:13:54
And I think for China, this should be a somewhat influential argument, because China does feel misunderstood in the world. And it’s like, “Okay, well, there’s one way. If you want to help people understand, put it in Wikipedia. That’s what people go to when they want to understand.”
Lex Fridman
03:14:08
And give the amazing, incredible people of China a voice.
Jimmy Wales
03:14:13
Exactly.
Lex Fridman
03:14:14
Jimmy, I thank you so much. I’m such a huge fan of everything you’ve done.
Jimmy Wales
03:14:18
Oh, thank you. That’s really great.
Lex Fridman
03:14:18
I keep saying Wikipedia. I’m deeply, deeply, deeply, deeply grateful for Wikipedia. I love it. It brings me joy. I donate all the time. You should donate too. It’s a huge honor to finally talk with you, and this is just amazing. Thank you so much for today.
Jimmy Wales
03:14:31
Thanks for having me.
Lex Fridman
03:14:33
Thanks for listening to this conversation with Jimmy Wales. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from the historian Daniel Boorstin: “The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” Thank you for listening, and hope to see you next time.

#292 – Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity

Robin Hanson is a professor at George Mason University and researcher at Future of Humanity Institute at Oxford. Please support this podcast by checking out our sponsors:
Lambda: https://lambdalabs.com/lex
Audible: https://audible.com/lex
BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
BetterHelp: https://betterhelp.com/lex to get 10% off
ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free

EPISODE LINKS:
Robin’s Twitter: https://twitter.com/robinhanson
Robin’s Website: https://mason.gmu.edu/~rhanson
Grabby Aliens (paper): https://grabbyaliens.com/paper
The Elephant in the Brain (book): https://amazon.com/dp/0197551955/ref=nosim?tag=turingmachi08-20
The Age of Em (book): https://amazon.com/dp/0198817827/ref=nosim?tag=turingmachi08-20

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here’s the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(06:49) – Grabby aliens
(44:33) – War and competition
(50:07) – Global government
(1:02:58) – Humanity’s future
(1:13:00) – Hello aliens
(1:40:03) – UFO sightings
(2:04:40) – Conspiracy theories
(2:12:58) – Elephant in the brain
(2:26:29) – Medicine
(2:38:58) – Institutions
(3:05:52) – Physics
(3:10:43) – Artificial intelligence
(3:28:32) – Economics
(3:31:53) – Political science
(3:37:42) – Advice for young people
(3:46:33) – Darkest moments
(3:49:34) – Love and loss
(3:58:57) – Immortality
(4:02:53) – Simulation hypothesis
(4:13:10) – Meaning of life

#291 – Jonathan Haidt: The Case Against Social Media

Jonathan Haidt is a social psychologist at NYU and author of The Coddling of the American Mind, The Righteous Mind, and The Happiness Hypothesis. Please support this podcast by checking out our sponsors:
Uncruise: https://uncruise.com/pages/lex
Notion: https://notion.com/startups to get up to $1000 off team plan
Blinkist: https://blinkist.com/lex and use code LEX to get 25% off premium
Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off
Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get special savings

EPISODE LINKS:
Jonathan’s Twitter: https://twitter.com/JonHaidt
Jonathan’s Website: https://jonathanhaidt.com
Documents & Articles:
1. Social Media and Political Dysfunction: A Collaborative Review: https://docs.google.com/document/d/1vVAtMCQnz8WVxtSNQev_e1cGmY9rnY96ecYuAj6C548/edit
2. Teen Mental Health Testimony: https://www.judiciary.senate.gov/imo/media/doc/Haidt%20Testimony.pdf
3. The Atlantic article: https://www.theatlantic.com/magazine/archive/2022/05/social-media-democracy-trust-babel/629369/
Books:
1. The Coddling of the American Mind (book): https://amzn.to/3MW4HqL
2. The Righteous Mind (book): https://amzn.to/3to0tkj
3. The Happiness Hypothesis (book): https://amzn.to/3Mb1xP2

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here’s the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(07:57) – Social media and mental health
(21:45) – Mark Zuckerberg
(31:23) – Children’s use of social media
(42:08) – Social media and democracy
(58:14) – Elon Musk and Twitter
(1:14:45) – Anonymity on social media
(1:20:44) – Misinformation
(1:27:38) – Social media benefits
(1:30:22) – Political division on social media
(1:36:54) – Future of social media
(1:42:46) – Advice for young people

#290 – Dan Reynolds: Imagine Dragons

Dan Reynolds is the lead singer of Imagine Dragons, one of the most popular bands in the world. Please support this podcast by checking out our sponsors:
Brave: https://brave.com/lex
Mizzen+Main: https://mizzenandmain.com and use code LEX to get $35 off
Athletic Greens: https://athleticgreens.com/lex and use code LEX to get 1 month of fish oil
Indeed: https://indeed.com/lex to get $75 credit
Grammarly: https://grammarly.com/lex to get 20% off premium

EPISODE LINKS:
Dan’s Twitter: https://twitter.com/DanReynolds
Dan’s Instagram: https://instagram.com/danreynolds
Imagine Dragons Website: https://imaginedragonsmusic.com/


OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(08:29) – Programming
(27:30) – Johnny Depp and Amber Heard
(32:23) – Las Vegas
(37:25) – Spirituality
(40:38) – Ayahuasca
(50:56) – Depression and fame
(54:40) – Introvert
(1:07:19) – Advice from Charlie Sheen
(1:19:58) – Making music
(1:32:33) – Lesson from Rick Rubin
(1:38:34) – Believer
(1:46:05) – Father son relationship
(1:47:22) – Dan’s first song
(1:51:34) – Cat Stevens and Harry Chapin
(1:56:17) – Advice for young people
(2:04:49) – LGBTQ
(2:09:11) – Religion
(2:13:53) – Meaning of life
(2:17:02) – Dan sings

#289 – Stephen Kotkin: Putin, Zelenskyy, and War in Ukraine

Stephen Kotkin is a historian specializing in Stalin and Soviet history. Please support this podcast by checking out our sponsors:
Lambda: https://lambdalabs.com/lex
Scale: https://scale.com/lex
Athletic Greens: https://athleticgreens.com/lex and use code LEX to get 1 month of fish oil free
ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free
ROKA: https://roka.com/ and use code LEX to get 20% off your first order

EPISODE LINKS:
Stephen’s Website: https://history.princeton.edu/people/stephen-kotkin
Stalin: 1878-1928 (Vol 1): https://amzn.to/3NvokpC
Stalin: 1929-1941 (Vol 2): https://amzn.to/3wIYqsT


OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(10:17) – Putin and Stalin
(21:07) – Putin vs the West
(43:59) – Response to Oliver Stone
(55:05) – Russian invasion of Ukraine
(1:34:33) – Putin’s plan for the war
(1:42:32) – Henry Kissinger
(1:48:26) – Nuclear war
(1:59:00) – Parallels to World War II
(2:21:45) – China
(2:29:54) – World War III
(2:37:23) – Navalny
(2:41:40) – Meaning of life

#288 – Sarma Melngailis: Bad Vegan

Sarma Melngailis is a chef and restaurateur who was the subject of the Netflix documentary Bad Vegan: Fame. Fraud. Fugitives. Please support this podcast by checking out our sponsors:
Mailgun: https://lexfridman.com/mailgun
BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
Notion: https://notion.com/startups to get up to $1000 off team plan
BetterHelp: https://betterhelp.com/lex to get 10% off
Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get special savings

EPISODE LINKS:
Sarma’s Twitter: https://twitter.com/sarma
Sarma’s Instagram: https://instagram.com/sarmamelngailis
Sarma’s Website: https://sarmaraw.com
Fear and Loathing in Las Vegas (book): https://amzn.to/3G9pMvs
Party of One (book): https://amzn.to/3NtcH2n
Beautiful Ruins (book): https://amzn.to/38Cfgkc
Darkness Visible (book): https://amzn.to/3tdxoYL
Projections (book): https://amzn.to/38DrRDJ
Confessions of a Sociopath (book): https://amzn.to/3sM39Ys
On Love (book): https://amzn.to/3sS3Orj
Dear Reader (book): https://amzn.to/39JPE4M


OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(07:44) – Childhood
(12:46) – Films
(23:17) – Gifts
(32:20) – Favorite food creations
(38:31) – Leon: The Pitbull
(50:39) – Bad Vegan
(1:02:41) – Abusive relationship
(1:07:24) – Remorse for employees
(1:13:19) – Sociopathy
(1:23:44) – How Sarma met Anthony Strangis
(1:45:43) – Retrospection
(1:58:24) – Johnny Depp and Amber Heard
(2:06:17) – Is Anthony Strangis a sociopath?
(2:10:24) – What Bad Vegan got wrong
(2:27:40) – Darkest personal discovery
(2:38:22) – Road trip from hell
(2:45:36) – Wild stories
(2:54:33) – Prison
(3:04:13) – Ghislaine Maxwell
(3:19:44) – Running restaurants
(3:33:26) – Last meal
(3:37:50) – Relationships
(3:47:02) – Advice for young people
(3:52:16) – Regrets
(3:56:01) – Mortality
(4:12:04) – Love