Transcript for Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs | Lex Fridman Podcast #426

This is a transcript of Lex Fridman Podcast #426 with Edward Gibson. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human-generated and may have errors.


Introduction

Edward Gibson (00:00:00) Naively I certainly thought that all humans would have words for exact counting, and the Piraha don’t. Okay, so they don’t have any words for even one. There’s not a word for one in their language. And so there’s certainly not a word for two, three or four. And so that blows people’s minds often.
Lex Fridman (00:00:18) Yeah, that’s blowing my mind.
Edward Gibson (00:00:20) That’s pretty weird, isn’t it?
Lex Fridman (00:00:21) How are you going to ask, “I want two of those”?
Edward Gibson (00:00:25) You just don’t. And so that’s just not a thing you can possibly ask in Piraha. It’s not possible, there’s no words for that.
Lex Fridman (00:00:32) The following is a conversation with Edward Gibson, or Ted, as everybody calls him. He’s a psycholinguistics professor at MIT. He heads the MIT language lab that investigates why human languages look the way they do, and the relationship between culture, language, and how people represent, process and learn language. He also has a book titled Syntax: A Cognitive Approach, published by MIT Press, coming out this fall, so look out for that. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Edward Gibson. When did you first become fascinated with human language?

Human language

Edward Gibson (00:01:17) As a kid in school, when we had to structure sentences in English grammar, I found that process interesting. I found it confusing as to what it was I was being told to do. I didn’t understand what the theory was behind it, but I found it very interesting.
Lex Fridman (00:01:34) When you look at grammar, you’re almost thinking about it like a puzzle, almost a mathematical puzzle.
Edward Gibson (00:01:39) Yeah, I think that’s right. I didn’t know I was going to work on this at all at that point. I was a math geek person, computer scientist, I really liked computer science. And then I found language as a neat puzzle to work on from an engineering perspective actually, that’s what I … After I finished my undergraduate degree, which was computer science and math at Queen’s University in Canada, I decided to go to grad school, as that’s what I always thought I would do. And I went to Cambridge where they had a master’s program in computational linguistics. And I hadn’t taken a single language class before. All I’d taken was CS, computer science, math classes pretty much mostly as an undergrad. And I just thought this was an interesting thing to do for a year, because it was a single year program. And then I ended up spending my whole life doing it.
Lex Fridman (00:02:36) Fundamentally, your journey through life was one of a mathematician and a computer scientist. And then you discovered the puzzle, the problem of language, and approached it from that angle to try to understand it from that angle, almost like a mathematician or maybe even an engineer.
Edward Gibson (00:02:53) As an engineer, I’d say … To be frank, I had taken an AI class, I guess it was ’83 or ’84, ’85, somewhere in there a long time ago. And there was a natural language section in there. And it didn’t impress me. I thought, “There must be more interesting things we can do.”
(00:03:09) It seemed just a bunch of hacks to me, it didn’t seem like a real theory of things in any way. And so I just thought this seemed like an interesting area where there wasn’t enough good work.
Lex Fridman (00:03:23) Did you ever come across the philosophy angle of logic? If you think about the 80s with AI, the expert systems, where you try to maybe sidestep the poetry of language and some of the syntax and the grammar and all that stuff and go to the underlying meaning that language is trying to communicate, and try to somehow compress that in a computer-representable way? Did you ever come across that in your studies?
Edward Gibson (00:03:50) I probably did but I wasn’t as interested in it. I was trying to do the easier problems first, the ones I thought maybe were handleable, and it seemed like the syntax was easier, which is just the forms as opposed to the meaning. When you start talking about the meaning, that’s a very hard problem and it still is a really, really hard problem. But the forms are easier. And so I thought at least figuring out the forms of human language, which sounds really hard, is actually maybe more tractable.
Lex Fridman (00:04:19) It’s interesting. You think there is a big divide, there’s a gap, there’s a distance between form and meaning, because that’s a question that comes up a lot with LLMs, because they’re damn good at form.
Edward Gibson (00:04:33) Yeah, I think that’s what they’re good at, is form. And that’s why they’re good, because they can do form, meanings are …
Lex Fridman (00:04:39) Do you think there’s … Oh, wow. It’s an open question.
Edward Gibson (00:04:42) Yeah.
Lex Fridman (00:04:43) How close form and meaning are. We’ll discuss it, but to me, studying form, maybe it’s a romantic notion: form is the shadow of the bigger meaning thing underlying language. Language is how we communicate ideas. We communicate with each other using language. In understanding the structure of that communication, I think you start to understand the structure of thought and the structure of meaning behind those thoughts and communication, to me. But to you, big gap.

Generalizations in language

Edward Gibson (00:05:19) Yeah.
Lex Fridman (00:05:20) What do you find most beautiful about human language? Maybe the form of human language, the expression of human language.
Edward Gibson (00:05:27) What I find beautiful about human language is some of the generalizations that happen across the human languages, within and across a language. Let me give you an example of something which I find remarkable: if a language has a word order such that the verbs tend to come before their objects … English does that. The subject comes first in a simple sentence. I say the dog chased the cat, or Mary kicked the ball; the subject’s first, and then after the subject there’s the verb, and then we have objects. All these things come after in English; it’s generally the verb and then the objects, and most of the stuff that we want to say comes after the subject. There’s a lot of things we want to say, and they come after. And there’s a lot of languages like that. About 40% of the languages of the world look like that; they’re subject-verb-object languages. And then these languages tend to have prepositions, these little markers on the nouns that connect nouns to other nouns or nouns to verbs. When I see a preposition like in or on or of or about, say, I talk about something, the something is the object of that preposition that we have. These little markers also, just like verbs, come before their nouns.
(00:06:52) Now, we look at other languages like Japanese or Hindi; these are so-called verb-final languages. Maybe a little more than 40%, maybe 45% or even 50% of the world’s languages are verb-final. Those tend to have postpositions. They have the same kinds of markers as we do in English, but they put them after; the markers come after the nouns. Instead of talk about a book, you say a book about, the opposite order there in Japanese or in Hindi. You do the opposite, and the talk comes at the end; the verb will come at the end as well. Instead of Mary kicked the ball, it’s Mary ball kicked. And then if it’s Mary kicked the ball to John, it’s John to; the to, the marker there, is a postposition in these languages.
(00:07:52) And so a fascinating thing to me is that within a language, this order aligns; it’s harmonic. And so it’s one or the other, it’s either verb-initial or verb-final, and then correspondingly you’ll have either prepositions or postpositions. And that’s across the languages that we can look at. There’s around 7,000 languages on the earth right now, but we have information about, say, word order on around a thousand of those, a pretty decent amount of information. And for those thousand which we know about, about 95% fit that pattern. It’s about half and half: half are verb-initial like English and half are verb-final like Japanese.
Lex Fridman (00:08:41) Just to clarify, verb-initial is subject-verb-object.
Edward Gibson (00:08:45) That’s correct.
Lex Fridman (00:08:46) Verb-final is still subject-object-verb.
Edward Gibson (00:08:50) That’s correct. Yeah, the subject is generally first.
Lex Fridman (00:08:52) That’s so fascinating. I ate an apple, or I apple ate.
Edward Gibson (00:08:57) Yes.
Lex Fridman (00:08:57) Okay. And it’s fascinating that there’s a pretty even division in the world amongst those, 45%.
Edward Gibson (00:09:03) Yeah, it’s pretty even. And those two are the most common by far. In those two word orders, the subject tends to be first. There’s so many interesting things, but the thing I find so fascinating is there are these generalizations within and across a language. And there’s actually a simple explanation, I think, for a lot of that. And that is you’re trying to minimize dependencies between words. That’s basically the story, I think, behind a lot of why word order looks the way it does: we’re always connecting … What is a thing I’m telling you? I’m talking to you in sentences, you’re talking to me in sentences. These are sequences of words which are connected, and the connections are dependencies between the words.
(00:09:47) And it turns out that what we’re trying to do in a language is actually minimize those dependency links. It’s easier for me to say things if the words that are connecting for their meaning are close together. It’s easier for you in understanding if that’s also true. If they’re far away, it’s hard to produce that and it’s hard for you to understand. And the languages of the world, within a language and across languages, fit that generalization. It turns out that having verbs initial and then having prepositions ends up making dependencies shorter. And having verbs final and having postpositions ends up making dependencies shorter than if you cross them. If you cross them, you just end up … It’s possible, you can do it.
Lex Fridman (00:10:33) You mean within a language?
Edward Gibson (00:10:34) Within a language you can do it. It just ends up with longer dependencies than if you didn’t. And so languages tend to go that way; they call it harmonic. It was observed a long time ago, without the explanation, by a guy called Joseph Greenberg, who’s a famous typologist from Stanford. He observed a lot of generalizations about how word order works, and these are some of the harmonic generalizations that he observed.
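To make the dependency-length idea concrete, here is a toy sketch in Python. The phrase fragments, the particular head-dependent links, and the scoring are my own illustration of the idea described above, not anything from the conversation:

```python
# Toy illustration of dependency length minimization: score a word order
# by summing the distances between linked words. The links chosen here
# (verb -> preposition -> noun -> determiner) are an assumption for
# illustration only.

def total_dependency_length(links):
    """links: (head_position, dependent_position) pairs, 0-based."""
    return sum(abs(head - dep) for head, dep in links)

# Harmonic order: verb before its object, preposition before its noun.
# "talked about a book": talked->about, about->book, book->a
harmonic = [(0, 1), (1, 3), (3, 2)]

# Crossed order: verb first, but with a postposition after the noun.
# "talked a book about": talked->about, about->book, book->a
crossed = [(0, 3), (3, 2), (2, 1)]

print(total_dependency_length(harmonic))  # 4
print(total_dependency_length(crossed))   # 5, longer dependencies
```

The harmonic order wins by keeping each pair of linked words adjacent or nearly so, which is the pattern Greenberg’s generalizations describe.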
Lex Fridman (00:10:59) Harmonic generalizations about word order. There’s so many things I want to ask you. Okay, let me just ask some basics. You mentioned dependencies a few times. What do you mean by dependencies?

Dependency grammar

Edward Gibson (00:11:12) Well, what I mean is in language, there’s three components to the structure of language. One is the sounds. Cat is C, A and T in English. I’m not talking about that part. Then there’s two meaning parts, and those are the words. And you were talking about meaning earlier. Words have a form and they have a meaning associated with them. And so cat is a form in English and it has a meaning associated with whatever a cat is. And then the combinations of words, that’s what I’ll call grammar or syntax. That’s when I have a combination like the cat or two cats, okay, where I take two different words there and put them together, and I get a compositional meaning from putting those two different words together. And so that’s the syntax. And in any sentence or utterance, whatever, I’m talking to you, you’re talking to me, we have a bunch of words and we’re putting them together in a sequence, and it turns out they are connected, so that every word is connected to just one other word in that sentence. And so you end up with what’s called technically a tree. It’s a tree structure, where there’s a root of that utterance, of that sentence, and then there’s a bunch of dependents, like branches from that root that go down to the words. The words are the leaves in this metaphor for a tree.
Lex Fridman (00:12:34) A tree is also a mathematical construct.
Edward Gibson (00:12:37) Yeah. It’s a graph-theoretical thing, exactly.
Lex Fridman (00:12:38) A graph theory thing. It’s fascinating that you can break down a sentence into a tree and then every word is hanging onto another, is depending on it.
Edward Gibson (00:12:47) That’s right. All linguists will agree with that, no one …
Lex Fridman (00:12:51) This is not a controversial …
Edward Gibson (00:12:52) That is not controversial.
Lex Fridman (00:12:53) There’s nobody sitting here listening mad at you.
Edward Gibson (00:12:55) I do not think so, I don’t think so.
Lex Fridman (00:12:56) Okay. There’s no linguist sitting there mad at this.
Edward Gibson (00:12:58) No. In every language, I think everyone agrees that all sentences are trees at some level.
Lex Fridman (00:13:05) Can I pause on that?
Edward Gibson (00:13:06) Sure.
Lex Fridman (00:13:06) Because to me just as a layman, it is surprising that you can break down sentences in mostly all languages.
Edward Gibson (00:13:15) All languages, I think.
Lex Fridman (00:13:17) … into a tree.
Edward Gibson (00:13:17) I think so. I’ve never heard of anyone disagreeing with that.
Lex Fridman (00:13:21) That’s weird.
Edward Gibson (00:13:21) The details of the trees are what people disagree about.
Lex Fridman (00:13:25) Well, okay. What’s at the root of a tree? How do you construct … How hard is it? What is the process of constructing a tree from a sentence?
Edward Gibson (00:13:34) Well, this is where depending on what you’re … There’s different theoretical notions. I’m going to say the simplest thing, dependency grammar. A bunch of people invented this. Tesnière was the first, a French guy; the paper was published in 1959, but he was working on it in the ’30s and stuff. And it goes back to the philologist Pāṇini, who was doing this in ancient India, okay. The simplest thing we can think of is that there’s just connections between the words to make the utterance. And so let’s just say two dogs entered a room, okay, here’s a sentence. And so we’re connecting two and dogs together. There’s some dependency between those words to make some bigger meaning. And then we’re connecting dogs now to entered, and we connect a room somehow to entered. And so I’m going to connect entered to room, and then a back to room. That’s the tree. The root is entered; the thing is an entering event. That’s what we’re saying here. And the subject, which is whatever that dog is, two dogs it was, the connection goes back to dogs, and that goes back to two. That’s my tree. It starts at entered, goes to dogs, down to two. And on the other side, after the verb, the object, it goes to room, and then that goes back to the determiner or article, whatever you want to call that word. There’s a bunch of categories of words here we’re noticing. There are verbs; those are these things that typically refer to events and states in the world. And there are nouns, which typically refer to people, places and things, is what people say, but they can refer to events themselves as well. They’re marked by the category … The part of speech of a word is how it gets used in language. That’s how you decide what the category of a word is, not by the meaning but how it gets used.
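As a concrete picture of that tree, here is a minimal sketch; the dictionary encoding is my own, but the links are the ones just walked through:

```python
# Minimal sketch of the dependency tree for "Two dogs entered a room":
# entered is the root, dogs and room hang off it, and each determiner
# hangs off its noun.

sentence = ["two", "dogs", "entered", "a", "room"]

# head[i] = position of the word that word i depends on (None = root).
head = {0: 1,     # two -> dogs
        1: 2,     # dogs -> entered
        2: None,  # entered: the root
        3: 4,     # a -> room
        4: 2}     # room -> entered

for i, word in enumerate(sentence):
    parent = "ROOT" if head[i] is None else sentence[head[i]]
    print(f"{word} <- {parent}")
```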
Lex Fridman (00:15:30) How it’s used. What’s usually the root? Is it going to be the verb that defines the event?
Edward Gibson (00:15:36) Usually, usually. Yes, yes. Yeah.
Lex Fridman (00:15:38) Okay.
Edward Gibson (00:15:38) If I don’t say a verb, then there won’t be a verb, and so it’ll be something else.
Lex Fridman (00:15:43) Are we talking about language that’s correct language? What if you’re doing poetry and messing with stuff, then rules go out the window.
Edward Gibson (00:15:51) No. No, no, no, no. You’re constrained by whatever language you’re dealing with. Probably you have other constraints in poetry, such that you’re … Usually in poetry there’s multiple constraints that you want to … You want to usually convey multiple meanings, is the idea. And maybe you have a rhythm or a rhyming structure as well. But you usually are constrained by the rules of your language for the most part, and so you don’t violate those too much. You can violate them somewhat, but not too much. It has to be recognizable as your language. In English, I can’t say dogs two entered room a. I meant two dogs entered a room, and I can’t mess with the order of the articles and the nouns. You just can’t do that. In some languages, you can mess around with the order of words much more. You speak Russian; Russian has a much freer word order than English. And so in fact, you can move around words in … I told you that English has this subject-verb-object word order, and so does Russian, but Russian is much freer than English. And so you can actually mess around with the word order. Probably Russian poetry is going to be quite different from English poetry because the word order is much less constrained.
Lex Fridman (00:17:05) Yeah. There’s a much more extensive culture of poetry throughout the history of the last hundred years in Russia. And I always wondered why that is, but it seems that there’s more flexibility in the way the language is used. You’re morphing the language easier by altering the words, altering the order of the words and messing with it.
Edward Gibson (00:17:26) Well, you can just mess with different things in each language. And so in Russian you have case markers, which are these endings on the nouns, which tell you how each noun connects to the verb. We don’t have that in English. And so when I say Mary kissed John, I don’t know who the agent or the patient is, except by the order of the words. In Russian, you actually have a marker on the end. If you’re using a Russian name in each of those names, you’ll also say … It’ll be the nominative, which is marking the subject, or an accusative will mark the object. And you could put them in the reverse order. You could put accusative first. You could put the patient first and then the verb and then the subject. And that would be a perfectly good Russian sentence. And it would still … I could say John kissed Mary, meaning Mary kissed John, as long as I use the case markers in the right way. You can’t do that in English. And so …
Lex Fridman (00:18:22) I love the terminology of agent and patient and the other ones you used. Those are linguistic terms, correct?
Edward Gibson (00:18:29) Those are for meaning, those are meaning. And subject and object are generally used for position. Subject is just the thing that comes before the verb and the object is the one that comes after the verb. The agent is the thing doing, that’s what that means. The subject is often the person doing the action, the thing.
Lex Fridman (00:18:48) Okay, this is fascinating. How hard is it to form a tree in general? Is there a procedure to it? If you look at different languages, is it supposed to be a very natural … Is it automatable, or is there some human genius involved in construction …
Edward Gibson (00:19:01) I think it’s pretty automatable at this point. People can figure out what the words are. They can figure out the morphemes; technically, morphemes are the minimal meaning units within a language, okay. And so when you say eats or drinks, it actually has two morphemes in English. There’s the root, which is the verb, and then there’s some ending on it which tells you that’s the third-person singular.
Lex Fridman (00:19:25) Can you say what morphemes are?
Edward Gibson (00:19:25) Morphemes are just the minimal meaning units within a language. And then a word is just the things we put spaces between in English, and they have a little bit more; they have the morphology as well. They have the endings, this inflectional morphology, on the roots.
Lex Fridman (00:19:37) It modifies something about the word that adds additional meaning.
Edward Gibson (00:19:40) Yeah, yeah. And so we have a little bit of that in English, very little. You have much more in Russian, for instance. But we have a little bit in English, and so we have a little on the nouns: you can say it’s either singular or plural. And you can say the same thing for verbs. Simple present tense, for example: notice in English we say drinks. He drinks, but everyone else is, “I drink, you drink, we drink.”
(00:20:02) It’s unmarked in a way. But in the past tense, it’s just drank for everyone. There’s no person morphology at all in the past tense. There is morphology that’s marking past tense, but it’s an irregular there: drink to drank, it’s not even a regular form. In many verbs there’s an -ed we add, walk to walked; we add that to say it’s the past tense. I just happened to choose an irregular because the high-frequency words tend to have irregulars in English for …
Lex Fridman (00:20:30) What’s an irregular?
Edward Gibson (00:20:31) An irregular is just where there isn’t a rule. Drink to drank is an irregular.
Lex Fridman (00:20:35) Drink, drank. Okay, okay. Versus walked.
Edward Gibson (00:20:37) As opposed to walk, walked, talk, talked.
Lex Fridman (00:20:39) Yeah. And there’s a lot of irregulars in English.
Edward Gibson (00:20:42) There’s a lot of irregulars in English. The frequent ones, the common words, tend to be irregular. There’s many, many more low-frequency words, and those are the regular ones.
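The stored-exception picture he describes amounts to a small table of memorized forms plus a default rule. A toy sketch, with a mini-lexicon of my own purely for illustration:

```python
# Toy sketch of English past tense: irregular forms are stored
# exceptions; everything else falls through to the regular -ed rule.

IRREGULAR_PAST = {"drink": "drank", "eat": "ate", "go": "went"}  # tiny toy lexicon

def past_tense(verb):
    # High-frequency verbs tend to be stored irregulars...
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    # ...while the long tail of lower-frequency verbs is regular.
    return verb + "ed"

print(past_tense("drink"))  # drank
print(past_tense("walk"))   # walked
```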
Lex Fridman (00:20:53) The evolution of the irregulars is fascinating, because it’s essentially slang that’s sticky even though it breaks the rules. And then everybody uses it and doesn’t follow the rules, and they say screw it to the rules. It’s fascinating. You said morphemes, lots of questions. Morphology is what, the study of morphemes?

Morphology

Edward Gibson (00:21:10) Morphology is the connections between the morphemes and the roots. In English, we mostly have suffixes. We have endings on the words, not very much but a little bit, as opposed to prefixes. Some words, depending on your language, can have mostly prefixes, mostly suffixes or both. And then several languages have things called infixes, where you have some general form for the root and you put stuff in the middle, you change the vowels, stuff like that.
Lex Fridman (00:21:45) That’s fascinating, that’s fascinating. In general, there’s what, two morphemes per word? One or two, or three.
Edward Gibson (00:21:51) Well, in English it’s one or two. In English, it tends to be one or two. There can be more. In other languages, a language like Finnish, which has a very elaborate morphology, there may be 10 morphemes on the end of a root, okay. And so there may be millions of forms of a given word.
Lex Fridman (00:22:09) Okay, okay. I’ll ask the same question over and over; sometimes, to understand things like morphemes, it’s nice to just ask the question: how do these kinds of things evolve? You have a great book studying cognitive processing, how language is used for communication, the mathematical notion of how effective language is for communication, and what role that plays in the evolution of language. But just high level, how does a language evolve where English has one or two morphemes per word, and then Finnish has infinity per word. How does that happen? Is it just people …
Edward Gibson (00:22:58) That’s a really good question.
Lex Fridman (00:22:59) Yeah.
Edward Gibson (00:23:00) That’s a very good question: why do languages have more morphology versus less morphology? And I don’t think we know the answer to this. I think there’s just a lot of good solutions to the problem of communication. I believe, as you hinted, that language is an invented system by humans for communicating their ideas. And I think it comes down to we label things we want to talk about. Those are the morphemes and words; those are the things we want to talk about in the world. And we invent those things and then we put them together in ways that are easy for us to convey, to process. But that’s a naive view, and I don’t … I think it’s probably right, it’s naive and probably right.
Lex Fridman (00:23:43) Well, I don’t know if it’s naive. I think it’s simple.
Edward Gibson (00:23:46) Simple, yeah. [inaudible 00:23:47].
Lex Fridman (00:23:47) I think naive is an indication that it’s incorrect somehow. It’s trivial, too simple. I think it could very well be correct. But it’s interesting how sticky … It feels like two people got together. It just feels like once you figure out certain aspects of a language, that just becomes sticky and the tribe forms around that language; maybe the tribe forms first and then the language evolves. And then you just agree and you stick to whatever that is.
Edward Gibson (00:24:16) These are very interesting questions. We don’t really know very much about how even words get invented. Assuming they get invented, we don’t really know how that process works and how these things evolve. What we have is a current picture of a few thousand languages, a few thousand instances. We don’t have any real pictures of how these things are evolving. And then the evolution is massively confused by contact. As soon as one group runs into another … Humans are smart, and they take on whatever’s useful in the other group. And so any contrast which you’re talking about which I find useful, I’m going to start using as well. I worked a little bit in specific areas of words, in number words and in color words. And in color words, in English we have around 11 words that everyone knows for colors. And many more if you happen to be interested in color for some reason or other; if you’re a fashion designer or an artist or something, you may have many, many more words. But we can see millions. If you have normal color vision, normal trichromatic color vision, you can see millions of distinctions in colors. We don’t have millions of words.
(00:25:43) The most efficient … No, the most detailed color vocabulary would have over a million terms to distinguish all the different colors that we can see. But of course, we don’t have that. Somehow, it’s useful for English to have evolved in some way such that there’s 11 terms that people find useful to talk about, black, white, red, blue, green, yellow, purple, gray, pink. And I probably missed something there. There’s 11 that everyone knows and depending on your … But if you go to different cultures, especially the non-industrialized cultures, there’ll be many fewer. Some cultures will have only two, believe it or not. The Dani in Papua New Guinea have only two labels that the group uses for color, and those are roughly black and white. They are very, very dark and very, very light, which are roughly black and white. And you might think, “Oh, they’re dividing the whole color space into light and dark.”
(00:26:41) Or something. And that’s not really true. They mostly only label the black and the white things. They just don’t talk about the colors for the other ones. And then there’s other groups … I’ve worked with a group called the Tsimane down in Bolivia in South America, and they have three words that everyone knows, but there’s a few others that many people know. Depending on how you count, it’s between three and seven words that the group knows, okay. And again, they’re black and white; everyone knows those. And red, that tends to be the third word that cultures bring in, if there’s a third word. It’s always red, the third one. And then after that, all bets are off about what they bring in. And so after that, they bring in a big blue-green group; they have one for that. And then different people have different words that they’ll use for other parts of the space.
(00:27:39) Anyway, it’s probably related to what they want to talk … Not what they see because they see the same colors as we see. It’s not like they have a low color palette in the things they’re looking at. They’re looking at a lot of beautiful scenery okay, a lot of different colored flowers and berries and things. And so there’s lots of things of very bright colors but they just don’t label the color in those cases. We don’t know this but we think probably what’s going on here is why you label something is you need to talk to someone else about it. And why do I need to talk about a color? Well, if I have two things which are identical and I want you to give me the one that’s different, and the only way it varies is color, then I invent a word which tells you, “This is the one I want. I want the red sweater off the rack, not the green sweater.”
(00:28:35) There’s two. And so those things will be identical because these are things we made and they’re dyed and there’s nothing different about them. And so in industrialized society, everything we’ve got is pretty much arbitrarily colored. But you go to a non-industrialized group, that’s not true. And so it’s not like they’re not interested in color. If you bring bright colored things to them, they like them just like we like them. Bright colors are great, they’re beautiful, but they just don’t need to talk about them, they don’t have to.
Lex Fridman (00:29:07) Color words are probably a good example of how language evolves from function: when you need to communicate the use of something, then you invent different variations. And basically you can imagine that the evolution of a language has to do with what the early tribe is doing, what problems are facing them, and they’re quickly figuring out how to efficiently communicate the solution to those problems, whether it’s aesthetic or functional, all that stuff, running away from a mammoth or whatever. But I think what you’re pointing to is that we don’t have data on the evolution of language, because many languages formed a long time ago, so you don’t get the chatter.

Evolution of languages

Edward Gibson (00:29:50) We have a little bit for Old English to modern English, because there was a writing system and we can see how Old English looked. The word order changed, for instance, from Old English to Middle English to modern English. And so we can see things like that. But most languages don’t even have a writing system. Of the 7,000, only a small subset of those have a writing system. And even if they have a writing system, it’s not a very old writing system, and so they don’t have it … For Mandarin, for Chinese, we have a lot of evidence, for a long time, and for English, and not for much else. German a little bit, but not for a whole lot of … Long-term language evolution, we don’t have a lot. We have snapshots is what we’ve got, of current languages.
Lex Fridman (00:30:35) You get an inkling of that from the rapid communication on certain platforms. On Reddit, there’s different communities and they’ll come up with different slang, usually from my perspective driven by a little bit of humor or maybe mockery or whatever, just talking shit in different kinds of ways. And you could see the evolution of language there, because I think a lot of things on the internet, you don’t want to be the boring mainstream. You want to deviate from the proper way of talking. And so you get a lot of deviation, rapid deviation. Then when communities collide, you get … Just like you said, humans adapt to it. And you could see it through the lines of humor. It’s very difficult to study, but you can imagine a hundred years from now, well, if there’s a new language born for example, we’ll get really high-resolution data.
Edward Gibson (00:31:30) English is changing. English changes all the time. All languages change all the time. There was a famous result about the Queen’s English. If you look at the Queen’s vowels … The Queen’s English is supposed to be … Originally the proper way to talk was defined by however the queen talked, or the king, whoever was in charge. And so if you look at how her vowels changed from when she first became queen in 1952 or ’53 when she was coronated, that’s Queen Elizabeth who died recently of course, until 50 years later, her vowels shifted a lot. And so even in the sounds of British English, the way she was talking was changing; the vowels were changing slightly. That’s just in the sounds, there was change. We’re all interested in what’s driving any of these changes. The word order of English changed a lot over a thousand years. It used to look like German. It used to be a verb-final language with case marking. And it shifted to a verb-medial language, through a lot of contact, a lot of contact with French. And it became a verb-medial language with no case marking.
Lex Fridman (00:32:47) It’s evolving.
Edward Gibson (00:32:47) It totally evolved. It doesn’t evolve maybe very much in 20 years, is maybe what you’re talking about, but over 50 and 100 years, things change a lot, I think.
Lex Fridman (00:32:57) We’ll now have good data on it, which is great.

Noam Chomsky

Edward Gibson (00:33:00) That’s for sure. Yeah.
Lex Fridman (00:33:01) Can you talk to what is syntax and what is grammar? You wrote a book on syntax.
Edward Gibson (00:33:06) I did. You were asking me before about how do I figure out what a dependency structure is? I’d say the dependency structures aren’t that hard generally. I think there’s a lot of agreement on what they are for almost any sentence in most languages. I think people will agree on a lot of that. There are other parameters in the mix, such that some people think there’s a more complicated grammar than just a dependency structure. Noam Chomsky, he’s the most famous linguist ever, and he is famous for proposing a slightly more complicated syntax. And so he invented phrase structure grammar. He’s well known for many, many things, but in the late 50s and early 60s, he was basically figuring out what’s called formal language theory. And he figured out a framework for determining how complicated a certain type of language might be, how complicated the so-called phrase structure grammars of a language might be.
(00:34:06) And so his idea was that maybe we can think about the complexity of a language by how complicated the rules are. And the rules will look like this. They will have a left-hand side and a right-hand side. Something on the left-hand side will expand to the thing on the right-hand side. Say we’ll start with an S, which is the root, which is a sentence, and then we’re going to expand to things like a noun phrase and a verb phrase, is what he would say, for instance, okay. An S goes to an NP and a VP; that is a phrase structure rule. And then we figure out what an NP is. An NP is a determiner and a noun, for instance. And a verb phrase is something else: a verb and another noun phrase, another NP, for instance. Those are the rules of a very simple phrase structure. And so he proposed phrase structure grammar as a way to cover human languages.
Edward Gibson (00:35:00) And then he actually figured out that, well, depending on the formalization of those grammars, you might get more complicated or less complicated languages. He said, “Well, there are these things called context-free languages.” He thought human languages tend to be what he calls context-free languages. But there are simpler languages, which are so-called regular languages, and they have a more constrained form to the rules of the phrase structure, of these particular rules. So he basically discovered and invented ways to describe the phrase structure of a human language. And he was mostly interested in English initially, in his work in the ’50s.
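Written out as code, those rewrite rules can generate sentences directly. Here is a minimal sketch of a context-free phrase structure grammar in Python; the rules S goes to NP VP, NP goes to Det N, and VP goes to V NP are the ones just described, while the tiny lexicon is my own, chosen to match the running example:

```python
# Minimal sketch of a context-free phrase structure grammar: each
# left-hand category expands to one of its right-hand sides, regardless
# of surrounding context.
import random

RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["two"], ["a"], ["the"]],
    "N":   [["dogs"], ["room"], ["cat"]],
    "V":   [["entered"], ["chased"]],
}

def expand(symbol):
    if symbol not in RULES:          # terminal: an actual word
        return [symbol]
    rhs = random.choice(RULES[symbol])
    return [word for part in rhs for word in expand(part)]

print(" ".join(expand("S")))  # e.g. "two dogs entered a room"
```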
Lex Fridman (00:35:43) So quick questions around all of this. So formal language theory is the big field of just studying language formally?
Edward Gibson (00:35:49) Yes. And it doesn’t have to be human language there. We can have computer languages, any kind of system which is generating some set of expressions in a language. And those could be the statements in a computer language, for example. It could be that, or it could be human language.
Lex Fridman (00:36:10) So technically you can study programming languages?
Edward Gibson (00:36:12) Yes. And they are heavily studied using this formalism. There’s a big field of programming language theory within formal language theory.
Lex Fridman (00:36:21) And then phrase structure grammar is this idea that you can break down language into this S NP VP type of thing?
Edward Gibson (00:36:28) Yeah. It’s a particular formalism for describing language. And Chomsky was the first one. He’s the one who figured that stuff out back in the ’50s. And the context-free grammar is actually equivalent, in the sense that it generates the same sentences as a dependency grammar would. The dependency grammar is a little simpler in some way: you just have a root and it goes … We don’t have any of these categories; the rules are implicit, I guess, and we just have connections between words. The phrase structure grammar is a different way to think about the dependency grammar. It’s slightly more complicated, but it’s kind of the same in some ways.
Lex Fridman (00:37:07) To clarify, dependency grammar is the framework under which you see language, and you make a case that this is a good way to describe language.
Edward Gibson (00:37:17) That’s correct.
Lex Fridman (00:37:18) And Noam Chomsky is watching this, is very upset right now, so let’s … Just kidding. Where’s the place of disagreement between phrase structure grammar and dependency grammar?
Edward Gibson (00:37:33) They’re very close. So phrase structure grammar and dependency grammar aren’t that far apart. I like dependency grammar because it’s more perspicuous, it’s more transparent about representing the connections between the words. It’s just a little harder to see in phrase structure grammar.
(00:37:48) The place where Chomsky sort of deviated or went off from this is he also thought there was something called movement. And that’s where we disagree. That’s the place where I would say we disagree. I mean, maybe we’ll get into that later. But the idea is … Do you want me to explain that now?
Lex Fridman (00:38:08) I would love it. Can you explain movement?
Edward Gibson (00:38:10) Movement. Okay.
Lex Fridman (00:38:10) You’re saying so many interesting things.
Edward Gibson (00:38:13) Movement is … Chomsky basically sees English, and he says, “Okay.” As I said, we had that sentence earlier. It was like, “Two dogs entered the room.” Let’s change it a little bit. Say, “Two dogs will enter the room.” And he notices that, hey, in English, if I want to make a question, a yes/no question from that same sentence, instead of, “Two dogs will enter the room,” I say, “Will two dogs enter the room?” Okay, there’s a different way to say the same idea. And it’s like, well, the auxiliary verb, that will thing, is at the front as opposed to in the middle.
(00:38:46) And if you look at English, you see that that’s true for all those modal verbs and for other kinds of auxiliary verbs. In English, you always do that; you always put an auxiliary verb at the front. And when he saw that … So if I say, “I can win this bet. Can I win this bet?” I move the can to the front. So actually that’s a theory. I just gave you a theory there. He talks about it as movement. That word in the declarative is the root, is the default way to think about the sentence, and you move the auxiliary verb to the front. That’s a movement theory.
(00:39:19) And he just thought that was just so obvious that it must be true, that there’s nothing more to say about that. This is how auxiliary verbs work in English. There’s a movement rule such that, to get from the declarative to the interrogative, you’re moving the auxiliary to the front. And it’s a little more complicated as soon as you go to simple present and simple past. Because if I say, “John slept,” you have to say, “Did John sleep?” not, “Slept John?” right? And so you have to somehow get an auxiliary verb. And I guess underlyingly, it’s like slept is … It’s a little more complicated than that, but that’s his idea. There’s a movement.
(00:39:56) And so a different way to think about that … I mean, then he ended up showing later … So he proposed this theory of grammar, which has movement. There’s other places where he thought there’s movement, not just auxiliary verbs, but things like the passive in English and things like questions, WH questions, a bunch of places where he thought there’s also movement going on. And in each one of those, he thinks there’s words, well, phrases and words, moving around from one structure to another, which he called deep structure to surface structure. I mean, there’s two different structures in his theory.
(00:40:29) There’s a different way to think about this, which is there’s no movement at all. There’s a lexical copying rule, such that the word will, or the word can, these auxiliary verbs, they just have two forms. And one of them is the declarative and one of them is the interrogative. And you basically have the declarative one and, oh, I form the interrogative, or I can form one from the other; it doesn’t matter which direction you go. And I just have a new entry which has the same meaning, which has a slightly different argument structure. Argument structure is just a fancy word for the ordering of the words.
(00:41:03) And so if I say, as it was, “The two dogs can or will enter the room,” there’s two forms of will. One is will, declarative. And then, okay, I’ve got my subject to the left, it comes before me, and the verb comes after me in that one. And then the will, interrogative, it’s like, oh, I go first. Interrogative will is first, and then I have the subject immediately after, and then the verb after that. And so you can just generate from one of those words another word with a slightly different argument structure, with different ordering.
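One way to picture the two stored entries for will, as a rough sketch (the representation is my own, not any particular theory’s notation):

```python
# Rough sketch of lexical copying: two stored entries for "will" with
# the same meaning but different argument orders, instead of a movement
# rule. A defective word like "aren't" would simply lack one entry.

LEXICON = {
    ("will", "declarative"):   ["SUBJ", "will", "VERB"],
    ("will", "interrogative"): ["will", "SUBJ", "VERB"],
}

def realize(word, form, subj, verb):
    filler = {"SUBJ": subj, "VERB": verb}
    return " ".join(filler.get(slot, slot) for slot in LEXICON[(word, form)])

print(realize("will", "declarative", "two dogs", "enter the room"))
# two dogs will enter the room
print(realize("will", "interrogative", "two dogs", "enter the room"))
# will two dogs enter the room
```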
Lex Fridman (00:41:37) And these are just lexical copies. They’re not necessarily moving from one to another?
Edward Gibson (00:41:42) There’s no movement.
Lex Fridman (00:41:42) There’s a romantic notion that you have one main way to use a word and then you could move it around, which is essentially what movement is implying?
Edward Gibson (00:41:53) But the lexical copying story is similar. So then we do lexical copying for that same idea: maybe the declarative is the source and then we can copy it. And there’s multiple advantages of the lexical copying story. It’s not my story. This is Ivan Sag, a linguist; a bunch of linguists have been proposing these stories as well, in tandem with the movement story. Ivan Sag died a while ago, but he was one of the proponents of the non-movement, of the lexical copying story.
(00:42:24) And so a great advantage is, well, Chomsky really famously in 1971 showed that the movement story leads to learnability problems. It leads to problems for how language is learned. It’s really, really hard to figure out what the underlying structure of a language is if you have both phrase structure and movement. It’s really hard to figure out what came from what. There’s a lot of possibilities there. If you don’t have that problem, the learning problem gets a lot easier.
Lex Fridman (00:42:57) Just say there’s lexical copies. When we say the learning problem, do you mean humans learning a new language?
Edward Gibson (00:43:03) Yeah, just learning English. So a baby is lying around, listening in the crib, listening to me talk. And how are they learning English? Or maybe it’s a two-year-old who’s learning interrogatives and stuff. How are they doing that? Are they doing it from … Are they figuring it out? Chomsky said it’s impossible to figure it out, actually. He said it’s actually impossible, not hard, but impossible. And therefore that’s where universal grammar comes from: it has to be built in. And so what they’re learning is … In his story, movement is built in; it’s absolutely part of your language module. And then you’re just setting parameters. English is just a variant of the universal grammar, and you’re figuring out, oh, which orders does English do these things in?
(00:43:53) The non-movement story doesn’t have this. It’s much more bottom-up: you’re learning rules. You’re learning rules one by one, and oh, this word is connected to that word. One advantage is it’s learnable. Another advantage of it is that it predicts that not all auxiliaries might move. It might depend on the word, and that turns out to be true. So there are words that don’t really work as auxiliaries in both forms. They work in the declarative and not in the interrogative, or the reverse. I’ll give you the opposite first. I can say, “Aren’t I invited to the party?” And that’s an interrogative form. But it’s not from, “I aren’t invited to the party.” There is no, “I aren’t.” So that’s interrogative-only.
(00:44:42) And then we also have forms like ought. “I ought to do this.” And I guess some old British people can say …
Lex Fridman (00:44:51) “Ought I?”
Edward Gibson (00:44:51) Exactly. It doesn’t sound right, does it? For me it sounds ridiculous. I don’t even think ought is great. But I mean, I totally recognize, “I ought to do it.” Ought is not too bad actually. I can say, “I ought to do this.” That sounds pretty good.
Lex Fridman (00:45:02) If I’m trying to sound sophisticated, maybe?
Edward Gibson (00:45:04) I don’t know. It just sounds completely odd to me.
Lex Fridman (00:45:06) Ought I?
Edward Gibson (00:45:08) Anyway. So there are variants here. And a lot of these words just work in one versus the other. And that’s fine under the lexical copying story. It’s like, well, you just learn the usage; whatever the usage is, is what you do with this word. But it’s a little bit harder in the movement story. That’s an advantage, I think, of lexical copying: in all of these different places, there’s all these usage variants which make the movement story a little bit harder to work.
Lex Fridman (00:45:40) One of the main divisions here is the movement story versus the lexical copy story, and that has to do with the auxiliary words and so on. But if you just rewind to the phrase structure grammar versus dependency grammar …
Edward Gibson (00:45:52) Those are equivalent in some sense, in that for any dependency grammar, I can generate a phrase structure grammar which generates exactly the same sentences. I just like the dependency grammar formalism because it makes something really salient, which is the lengths of dependencies between words. That isn’t so obvious in the phrase structure. In the phrase structure, it’s just hard to see. It’s in there, it’s just very, very opaque.
Lex Fridman (00:46:21) Technically, I think phrase structure grammar is mappable to dependency grammar.
Edward Gibson (00:46:25) And vice versa.
Lex Fridman (00:46:25) And vice versa. But there’s these little labels, S, NP, VP.
Edward Gibson (00:46:30) For a particular dependency grammar you can make a phrase structure grammar which generates exactly those same sentences, and vice versa. But there are many phrase structure grammars for which you can’t really make a dependency grammar. I mean, you can do a lot more in a phrase structure grammar. But you get many more of these extra nodes, basically. You can have more structure in there. And some people like that, and maybe there’s value to that. I don’t like it.
Lex Fridman (00:46:55) Well, for you, would you clarify: dependency grammar is just, well, one word depends on only one other word. And you form these trees, and that really puts priority on those dependencies, as a tree where you can then measure the distance of the dependency from one word to the other. That can then map to the cognitive processing of the sentences, how easy it is to understand and all that kind of stuff. So it just puts the focus on the mathematical distance of dependence between words. So it’s just a different focus.
Edward Gibson (00:47:34) Absolutely.
Lex Fridman (00:47:35) Just to continue on the thread of Chomsky, because it’s really interesting. As you’re discussing disagreement, to the degree there’s disagreement, you’re also telling the history of the study of language, which is really awesome. So you mentioned context-free versus regular. Does that distinction come into play for dependency grammars?
Edward Gibson (00:47:54) No, not at all. I mean, regular languages are too simple for human languages. It’s a part of the hierarchy. But human languages, in the phrase structure world, are at least context-free, maybe a little bit more, a little bit harder than that.
(00:48:14) So there’s something called context-sensitive as well, where you can have, this is just the formal language description. In a context-free grammar, you have one… This is a bunch of formal language theory we’re doing here.
Lex Fridman (00:48:29) I love it.
Edward Gibson (00:48:29) Okay, so you have a left-hand side category, and you’re expanding to anything on the right. That’s context-free. So the idea is that that category on the left expands independent of context to those things, whatever they are, on the right. It doesn’t matter what. And context-sensitive says, okay, I actually have more than one thing on the left. I can tell you only in this context … Maybe you have a left and a right context, or just a left context or a right context. Having two or more things on the left tells you how to expand those things in that way. So it’s context-sensitive.
(00:49:02) A regular language is just more constrained in what the rules are allowed to look like. Basically, one very complicated rule is what a regular language amounts to. And so it doesn’t have any of these long-distance dependencies; it doesn’t allow recursion, for instance. There’s no recursion. Recursion is where … Human languages have recursion, they have embedding. Well, a regular language doesn’t allow center-embedded recursion, which human languages have, which is what …
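The center-embedding point can be made concrete with one recursive rule. A toy sketch of my own: a context-free rule that lets a clause sit inside another clause produces exactly the nested pattern discussed next, which a regular (finite-state) grammar cannot generate to arbitrary depth:

```python
# Toy sketch of center-embedded recursion: each (noun, verb) pair wraps
# around the next clause, so the nesting sits in the *middle* of the
# string. Regular grammars cannot generate this pattern unboundedly.

def embed(pairs):
    """pairs: list of (noun, verb), outermost clause first."""
    (noun, verb), rest = pairs[0], pairs[1:]
    inner = f" who {embed(rest)}" if rest else ""
    return f"the {noun}{inner} {verb}"

print(embed([("boy", "cried")]))
# the boy cried
print(embed([("boy", "cried"), ("cat", "scratched")]))
# the boy who the cat scratched cried
print(embed([("boy", "cried"), ("cat", "scratched"), ("dog", "chased")]))
# the boy who the cat who the dog chased scratched cried
```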
Lex Fridman (00:49:34) Center-embedded recursion, so within a sentence? Within a sentence?
Edward Gibson (00:49:37) Yeah, within a sentence. We’re going to get to that, but the formal language stuff is a little aside. Chomsky wasn’t even proposing it for human languages. He was just pointing out that human languages are context-free. That was stuff we did for formal languages, and what he was most interested in was human language. And the movement is where he set off on, I would say, a very interesting but wrong foot. I agree, it’s a very interesting history.
(00:50:08) So he proposed multiple theories, in ’57 and then ’65. They all have this framework though: it was phrase structure plus movement, different versions of the phrase structure and the movement, in ’57. These are the most famous original bits of Chomsky’s work. And then ’71 is when he figured out that those lead to learning problems, that there are cases where a kid could never figure out which set of rules was intended. And then he said, “Well, that means it’s innate.” It’s kind of interesting. He just really thought the movement was so obviously true that he didn’t even entertain giving it up. It’s just, that’s obviously right.
(00:50:47) And it was later when people figured out that there are all these subtle ways in which things which look like generalizations aren’t generalizations across the category. They’re word-specific, and they kind of work, but they don’t work across various other words in the category. And so it’s easier to just think of these things as lexical copies. And I think he was very obsessed … I don’t know, I’m guessing. But he really wanted this story to be simple in some sense, and language is a little more complicated in some sense. He didn’t like words. He never talks about words; he likes to talk about combinations of words. And words are … You look in a dictionary, there’s 50 senses for a common word. The word take will have 30 or 40 senses in it.
(00:51:32) So there will be many different senses for common words. And he just doesn’t think about that, or he doesn’t think that’s language. I think he doesn’t think that’s language. He thinks that words are distinct from combinations of words. I think they’re the same. If you look at my brain in the scanner while I’m listening to a language I understand … I can localize my language network in a few minutes, in like 15 minutes. And what you do is I listen to a language I know, I listen to maybe some language I don’t know, or I listen to muffled speech, or I read sentences and I read non-words. I can do anything like this, anything that’s really like English and anything that’s not very like English. So I’ve got something that’s like it, and something that’s not, that I control.
(00:52:16) And the voxels, which are just the 3D pixels in my brain, that are responding most, that’s the language area. And that’s this left-lateralized area in my head. And wherever I look in that network, if you look for the combinations versus the words, it’s everywhere.
Lex Fridman (00:52:38) It’s the same.
Edward Gibson (00:52:39) It’s the same.
Lex Fridman (00:52:39) That’s fascinating.
Edward Gibson (00:52:39) And so it’s hard to find … There are no areas that we know … I mean, it’s a little overstated right now. At this point, the technology isn’t great; it’s not bad. But the best way we have to figure out what’s going on in my brain when I’m listening to or reading language is to use fMRI, functional magnetic resonance imaging. And that’s a very good localization method. So I can figure out where exactly these signals are coming from, down to millimeters, cubic millimeters or smaller, very small. We can figure those out very well.
(00:53:11) The problem is the when. It’s measuring oxygen, and oxygen takes a little while to get to those cells, so it takes on the order of seconds. I talk fast, I probably listen fast, and I can probably understand things really fast. A lot of stuff happens in two seconds. And so to say that we know what’s going on with the words right now in that network … Our best guess is that that whole network is doing something similar. But maybe different parts of that network are doing different things, and that’s probably the case. We just don’t have very good methods to figure that out at this moment.
Edward Gibson (00:53:49) Since we’re kind of talking about the history of the study of language, what other interesting disagreements, and you’re both at MIT or were for a long time. What interesting disagreements there, tension of ideas are there, between you and Noam Chomsky? And we should say that Noam was in the linguistics department, and you’re, I guess for a time were affiliated there, but primarily brain and cognitive science department. Which is another way of studying language. You’ve been talking about fMRI. Is there something else interesting to bring to the surface about the disagreement between the two of you, or other people at this point?
Lex Fridman (00:54:29) I mean, I’ve been at MIT for 31 years since 1993 and Chomsky’s been there much longer. So I met him, I knew him. I met him when I first got there, I guess. And we would interact every now and then. I’d say our biggest difference is our methods. And so that’s the biggest difference between me and Noam, is that I gather data from people. I do experiments with people and I gather corpus data, whatever corpus data is available, and we do quantitative methods to evaluate any kind of hypothesis we have. He just doesn’t do that. And so he has never once been associated with any experiment or corpus work ever.
(00:55:16) And so it’s all thought experiments. It’s his own intuitions. So I just don’t think that’s the way to do things. That’s across the street, they’re across the street from us, difference between brain and cog-sci and linguistics. I mean, not all linguists, some of the linguists, depending on what you do, more speech-oriented, they do more quantitative stuff. But in the meaning words and well, it’s combinations of words, syntax, semantics, they tend not to do experiments and corpus analyses.
Edward Gibson (00:55:49) On the linguistics side, probably. But the method is a symptom of a bigger approach, which on Noam’s side is more of a philosophy-psychology approach. For you, it’s more data-driven, almost like a mathematical approach.
Lex Fridman (00:56:03) Yeah, I mean, I’m a psychologist, so I would say we’re in psychology. Brain and cognitive sciences is MIT’s old psychology department. It was a psychology department up until 1985, and then it became the Brain and Cognitive Sciences Department. My training is math and computer science, but I’m a psychologist. I mean, I don’t know what I am.
Edward Gibson (00:56:24) So, data-driven psychologist, you are.
Lex Fridman (00:56:27) I am what I am. But I’m happy to be called a linguist. I’m happy to be called a computer scientist. I’m happy to be called a psychologist, any of those things.
Edward Gibson (00:56:33) But how that actually manifests itself, outside of the methodology, is in these differences, these subtle differences, like the movement story versus the lexical copy story.
Lex Fridman (00:56:43) Yeah. Those are theories.
Edward Gibson (00:56:45) Those are theories.
Lex Fridman (00:56:46) So the theories are… But I think the reason we differ in part is because of how we evaluate the theories. And so I evaluate theories quantitatively and Noam doesn’t.
Edward Gibson (00:56:58) Got it. Okay. Well, let’s explore the theories that you explore in your book. Let’s return to this dependency grammar framework of looking at language. What’s a good justification why the dependency grammar framework is a good way to explain language? What’s your intuition?
Lex Fridman (00:57:17) The reason I like dependency grammar, as I’ve said before, is that it’s very transparent about its representation of distance between words. All it is is you’ve got a bunch of words you’re connecting together to make a sentence. And a really neat insight, which turns out to be true, is that the further apart the pair of words are that you’re connecting, the harder it is to do the production. The harder it is to do the comprehension. If it’s harder to produce, it’s harder to understand when the words are far apart. When they’re close together, it’s easy to produce and it’s easy to comprehend.
(00:57:51) Let me give you an example. We have in any language, we have mostly local connections between words, but they’re abstract. The connections are abstracted between categories of words. And so you can always make things further apart if you add modification, for example, after a noun. So a noun in English comes before a verb, the subject noun comes before a verb, and then there’s an object after, for example.
(00:58:22) So I can say, what I said before, “The dog entered the room,” or something like that. So I can modify dog. If I say something more about dog after it, then what I’m doing, indirectly, is lengthening the dependency between dog and entered, by adding more stuff to it. So just to make it explicit here, if I say, “The boy who the cat scratched, cried.” We’re going to have a mean cat here. And so what I’ve got here is, the boy cried would be a very short, simple sentence, and I just told you something about the boy: I told you it was the boy who the cat scratched.
Edward Gibson (00:59:00) So the cried is connected to the boy. The cried at the end is connected to the boy at the beginning.
Lex Fridman (00:59:05) Right. And so I can do that. I can say that, that’s a perfectly fine English sentence. And I can say, “The cat, which the dog chased, ran away,” or something. I can do that. But it’s really hard now, I’ve got, whatever I have here, I have the boy who the cat. Now let’s say I try to modify cat. “The boy, who the cat, which the dog chased, scratched, ran away.” Oh my God, that’s hard, right? I’m just working that through in my head, how to produce, and it’s really very just horrendous to understand. It’s not so bad, at least I’ve got intonation there to mark the boundaries and stuff. But that’s really complicated. That’s sort of English in a way. I mean that follows the rules of English.
(00:59:52) So what’s interesting about that is that what I’m doing is nesting dependencies there. I’ve got a subject connected to a verb there, and then I’m modifying that with a clause, another clause, which happens to have a subject and a verb relation. And I’m trying to do that again on the second one. And what that does is it lengthens out the dependencies; multiple dependencies actually get lengthened out there. The dependencies get longer: the outside ones get long, and even the ones in between get kind of long.
(01:00:20) What’s fascinating is that that’s bad. That’s really horrendous in English, but that’s horrendous in any language. No matter what language you look at, if you just figure out some structure where I’m going to have some modification following some head, which is connected to some later head, and I do it again, it won’t be good. Guaranteed. 100%, that will be uninterpretable in that language, in the same way that was uninterpretable in English.
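To make the dependency-distance idea concrete, here is a minimal Python sketch. It counts words as the distance metric (one simple choice, as comes up next), and the head indices for the center-embedded example are a simplified parse for illustration, not a claim about any particular treebank’s conventions.

```python
# Toy sketch: dependency lengths for a sentence, where heads[i] is the
# index of word i's head, or None for the root.
def dependency_lengths(heads):
    return [abs(i - h) for i, h in enumerate(heads) if h is not None]

# "The boy cried": The -> boy, boy -> cried (root).
simple = [1, 2, None]
print(sum(dependency_lengths(simple)))  # 2: every connection is local

# "The boy who the cat scratched cried" (simplified parse):
# The->boy, boy->cried, who->scratched, the->cat, cat->scratched,
# scratched->boy. The boy-cried dependency now spans five words.
nested = [1, 6, 5, 4, 5, 1, None]
print(sum(dependency_lengths(nested)))  # 15: much longer in total
```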
Edward Gibson (01:00:46) Just to clarify, the distance of the dependencies: when it’s “the boy cried,” there’s a dependency between two words, and then you’re counting the number of, what, morphemes between them?
Lex Fridman (01:01:01) That’s a good question. I just say words. Whether it’s words or morphemes in between, we don’t know that. Actually, that’s a very good question: what is the distance metric? But let’s just say it’s words, sure.
Edward Gibson (01:01:09) Okay. And you’re saying the longer the distance of that dependence, no matter the language, except legalese.
Lex Fridman (01:01:19) Even legalese.
Edward Gibson (01:01:19) Even legalese, okay, we’ll talk about that. But the people who speak that language will be very upset. Not upset, but they’ll either not understand it, or their brain will be working overtime.
Lex Fridman (01:01:34) They’ll have a hard time either producing or comprehending it. They might tell you that’s not their language. It’s sort of their language. They’ll agree that each of those pieces is part of their language, but somehow that combination will be very, very difficult to produce and understand.
Edward Gibson (01:01:48) Is that a chicken or the egg issue here?
Lex Fridman (01:01:52) Well, I’m giving you an explanation. Well, I mean I’m giving you two kinds of explanations. I’m telling you that center embedding, that’s nesting, those are the same. Those are synonyms for the same concept here. And the explanation for… Those are always hard, center embedding and nesting are always hard. And I gave you an explanation for why they might be hard, which is long distance connections. When you do center embedding, when you do nesting, you always have long distance connections between the dependents.
(01:02:17) So that’s not necessarily the right explanation. I can go through reasons why that’s probably a good explanation. And it’s not really just about one of them; probably it’s a pair of these dependencies getting long that drives you to be really confused in that case. And so what’s the behavioral consequence there? I mean, this is kind of about methods: how do we get at this? You could try to do experiments to get people to produce these things; they’re going to have a hard time producing them. You can try to do experiments to get them to understand them, and see how well they understand them. Can they understand them? Another method is to give people partial materials and ask them to complete them, those center-embedded materials, and they’ll fail. So I’ve done that. I’ve done all these kinds of things.
Edward Gibson (01:03:04) What do you mean? So center embedding, meaning you can take a normal sentence like, “The boy cried,” and inject a bunch of crap in the middle.
Lex Fridman (01:03:11) Yes.
Edward Gibson (01:03:12) That separates the boy and the cried. Okay. That’s center embedding. And nesting is on top of that?
Lex Fridman (01:03:18) Same thing. No, no, nesting is the same thing. Center embedding, those are totally equivalent terms. I’m sorry, I sometimes use one and sometimes…
Edward Gibson (01:03:25) Got it, got it. Totally equivalent.
Lex Fridman (01:03:26) They don’t mean anything different.
Edward Gibson (01:03:26) Got it. And then what you’re saying is there’s a bunch of different kinds of experiments you can do. I mean, I like the understanding one: have more embedding, more center embedding, and is it easier or harder to understand? But then you have to measure the level of understanding, I guess?
Lex Fridman (01:03:39) Yeah, you could. I mean there’s multiple ways to do that. I mean there’s the simplest way is just ask people how good does it sound? How natural does it sound? That’s a very blunt, but very good measure. It’s very reliable. People will do the same thing. And so it’s like, “I don’t know what it means exactly, but it’s doing something.” Such that we’re measuring something about the confusion, the difficulty associated with those.
Edward Gibson (01:03:59) And those are giving you a signal. That’s why you can say they’re… Okay. What about the completion with the center embedding?
Lex Fridman (01:04:05) If you give them a partial sentence, say I say, “The book, which the author who,” and I ask you to now finish that off for me.
Edward Gibson (01:04:15) That breaks people’s brain.
Lex Fridman (01:04:18) Yeah, yeah. But say it’s written in front of you and you can have as much time as you want. That one’s not too hard, right? It’s like, oh, “The book, which the author who I met wrote, was good.” That’s a very simple completion for that.
(01:04:33) If I give that as a completion somewhere online, on a crowdsourcing platform, and ask people to complete it, they will miss off a verb very regularly. Like half the time, maybe two-thirds of the time, they’ll just leave off one of those verb phrases. Even with that simple one. So say, “The book, which the author who…” You need three verbs, I need three verbs: “who I met, wrote, was good.” And they’ll give me two. They’ll say, “who was famous, was good,” or something like that. They’ll just give me two. And that’ll happen about 60% of the time. So 40%, maybe 30%, they’ll do it correctly, correctly meaning they’ll do a three-verb phrase. I don’t know what’s correct or not. This is hard. It’s a hard task.
Edward Gibson (01:05:20) Yeah, actually I’m struggling with it in my head.
Lex Fridman (01:05:22) Well, it’s easier written.
Edward Gibson (01:05:24) When you stare at it, it’s easier?
Lex Fridman (01:05:25) If you can look at it, it’s a little easier. Listening is pretty tough, because there’s no trace of it. You have to remember the words that I’m saying, which is very hard auditorily. We wouldn’t do it this way. You’d do it written, where you can look at it and figure it out. It’s easier in many dimensions, in some ways, depending on the person. And it’s easier to gather written data. I work in psycholinguistics, the psychology of language and stuff, and a lot of our work is based on written stuff, because it’s so easy to gather data from people doing written kinds of tasks.
(01:05:56) Spoken tasks are just more complicated to administer and analyze because people do weird things when they speak. And it’s harder to analyze what they do, but they generally point to the same kinds of things.
Edward Gibson (01:06:10) So the universal theory of language by Ted Gibson is that you can form dependency trees for many sentences.
Lex Fridman (01:06:21) That’s right.
Edward Gibson (01:06:21) You can measure the distance in some way of those dependencies. And then you can say that most languages have very short dependencies.
Lex Fridman (01:06:30) All languages.
Edward Gibson (01:06:31) All languages.
Lex Fridman (01:06:32) All languages have short dependencies. You can actually measure that. An ex-student of mine, Richard Futrell, who’s at the University of California, Irvine, did a thing a bunch of years ago now, where he looked at all the languages we could look at, which was about 40 initially. And now I think there’s about 64 for which there are dependency structures. Meaning there’s got to be a big text, a bunch of texts, which have been parsed for their dependency structures. And there’s about 60 of those which have been parsed that way. And for all of those, what he did was take any sentence in one of those languages, where you can do the dependency structure and then start at the root. We’re talking about dependency structures; that’s pretty easy now. And he tried to figure out, as a control, other ways you might say the same sentence in that language.
(01:07:21) And so it’s just like, all right, there’s a root. Let’s say a sentence is, let’s go back to, “Two dogs entered the room.” So entered is the root, and entered has two dependents: it’s got dogs and it has room. And what he did is scramble that order, those three things, the head and the two dependents, into some random order, just random. And then just do that for all the dependents down the tree. So now do it lower down: there’s “two” under “dogs” and “the” under “room.” And it’s a very short sentence. When sentences get longer and you have more dependents, there’s more scrambling that’s possible. So, you could figure out one scrambling for that sentence.
(01:08:02) He did this a hundred times for every sentence in every one of these texts, every corpus. And then he just compared the dependency lengths in those random scramblings to what actually happened, what the original English or French or German or Chinese was, across all these, like, 60 languages. And the dependency lengths are always shorter in the real language compared to this kind of control.
(01:08:28) And there’s another version of his control that’s a little more rigid. The way I described it, you could have crossed dependencies: by scrambling that way, you could scramble in any way at all. Languages don’t do that. They tend not to cross dependencies very much. In the dependency structure, they tend to keep things non-crossed. There’s a technical term for that, projective, but projective is just non-crossed, that’s all it is. And if you constrain the scrambling so that it only gives you projective, non-crossed orders, the same thing holds.
(01:09:03) So human languages are still much shorter than this kind of control. What it means is that in every language, we’re trying to put things close together, relative to this kind of control. It doesn’t matter about the word order. Some of these are verb-final, some of these are verb-medial like English, and some are even verb-initial: there are a few languages in the world which have VSO word order, verb-subject-object languages. We haven’t talked about those. It’s like 10% of them.
Edward Gibson (01:09:33) And even in those languages…
Lex Fridman (01:09:34) It doesn’t matter.
Edward Gibson (01:09:36) It’s still short dependencies?
Lex Fridman (01:09:37) Short dependencies is the rule.
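A toy version of the random-reordering baseline described above might look like the following sketch. The sentence, tree, and the choice to keep each subtree contiguous (the projective, non-crossed control) are illustrative; this is not the actual code or corpora from that study.

```python
import random

def linearize(head, children, rng):
    """Randomly order the subtree rooted at head, keeping subtrees
    contiguous so the resulting order is projective (non-crossed)."""
    units = [linearize(c, children, rng) for c in children.get(head, [])]
    units.append([head])  # the head itself is one of the orderable units
    rng.shuffle(units)
    return [w for unit in units for w in unit]

def total_length(order, edges):
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[h] - pos[d]) for h, d in edges)

# "Two dogs entered the room": entered is the root with dependents
# dogs and room; two depends on dogs, the depends on room.
edges = [("entered", "dogs"), ("entered", "room"),
         ("dogs", "two"), ("room", "the")]
children = {"entered": ["dogs", "room"], "dogs": ["two"], "room": ["the"]}

rng = random.Random(0)
attested = ["two", "dogs", "entered", "the", "room"]
baseline = [total_length(linearize("entered", children, rng), edges)
            for _ in range(100)]
# Real orders tend to be shorter than the random projective baseline.
print(total_length(attested, edges), sum(baseline) / len(baseline))
```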
Edward Gibson (01:09:39) Okay, so what are some possible explanations for that? For why languages have evolved that way? So that’s one of the, I suppose, disagreements you might have with Chomsky. So you consider the evolution of language in terms of information theory.
Edward Gibson (01:10:00) For you, the purpose of language is ease of communication, right, and processing?
Lex Fridman (01:10:04) That’s right. That’s right. The story here is just about communication. It is just about production, really. It’s about ease of production, is the story.
Edward Gibson (01:10:13) When you say production, can you-
Lex Fridman (01:10:15) Oh, I just mean ease of language production. What I’m doing whenever I’m talking to you is somehow formulating some idea in my head and putting these words together, and it’s easier for me to do that, to say something where the words are closely connected in a dependency, as opposed to separated by putting something in between, and over and over again. It’s just hard for me to keep that in my head. That’s the whole story. The story is basically that the dependency grammar sort of gives that to you: long is bad, short is good. It’s easier to keep in mind because you have to keep it in mind, probably for production. It probably matters in comprehension as well.
Edward Gibson (01:10:58) It’s on both sides of it, the production and the-
Lex Fridman (01:11:00) But I would guess it’s probably evolved for production. It’s about producing. What’s easier for me to say ends up being easier for you also. That’s very hard to disentangle, this idea of who is it for? Is it for me, the speaker, or is it for you, the listener? Part of my language is for you. The way I talk to you is going to be different from how I talk to different people, so I’m definitely angling what I’m saying to who I’m saying it to. It’s not like I’m just talking the same way to every single person. And so I am sensitive to my audience, but does that work itself out in the dependency length differences? I don’t know. Maybe that’s just about the words, that part, which words I select.
Edward Gibson (01:11:41) My initial intuition is that you optimize language for the audience.
Lex Fridman (01:11:47) But it’s both.
Edward Gibson (01:11:48) It’s just kind of messing with my head a little bit to say that the primary objective of the optimization might be the ease of production.
Lex Fridman (01:11:57) We have different senses, I guess. I’m very selfish and I think it’s all about me. I’m like, “I’m just doing what’s easiest for me at all times.”
Edward Gibson (01:12:06) What’s easiest for me.
Lex Fridman (01:12:09) But I have to, of course, choose the words that I think you’re going to know. I’m not going to choose words you don’t know. In fact, I’m going to fix that. But maybe for the syntax, for the combinations, it’s just about me. I don’t know though. It’s very hard to-
Edward Gibson (01:12:24) Wait, wait, wait. But the purpose of communication is to be understood-
Lex Fridman (01:12:24) Absolutely.
Edward Gibson (01:12:28) … is to convince others and so on. So the selfish thing is to be understood. It’s about the listener.
Lex Fridman (01:12:32) Okay. It’s a little circular there too then. Okay.
Edward Gibson (01:12:34) Right. The ease of production-
Lex Fridman (01:12:37) Helps me be understood then. I don’t think it’s circular.
Edward Gibson (01:12:43) No, I think the primary objective is about the listener because otherwise, if you’re optimizing for the ease of production, then you’re not going to have any of the interesting complexity of language. You’re trying to explain-
Lex Fridman (01:12:55) Well, let’s control for what it is I want to say. I’m saying let’s control for the thing, the message. Control for the message I want to tell you-
Edward Gibson (01:13:02) But that means the message needs to be understood. That’s the goal.
Lex Fridman (01:13:05) But that’s the meaning. So I’m still talking about just the form of the meaning. How do I frame the form of the meaning is all I’m talking about. You’re talking about a harder thing, I think. It’s like trying to change the meaning. Let’s keep the meaning constant.
Edward Gibson (01:13:20) Got it.
Lex Fridman (01:13:20) If you keep the meaning constant, how can I phrase whatever it is I need to say? I got to pick the right words and I’m going to pick the order so it’s easy for me. That’s what I think is probably-
Edward Gibson (01:13:32) I think I’m still tying meaning and form together in my head, but you’re saying if you keep the meaning of what you’re saying constant, the primary objective of that optimization could be production. That’s interesting. I’m struggling to keep the meaning constant. I’m a human, right? So for me, without having introspected on this, the form and the meaning are tied together deeply, because I’m a human. For me when I’m speaking, because I haven’t thought about language in a rigorous way, about the form of language-
Lex Fridman (01:14:16) But look, for any event, there’s, I don’t want to say infinite, but unbounded ways that I might communicate that same event. Those two dogs entered a room, I can say in many, many different ways. I could say, “Hey, there’s two dogs. They entered the room.” “Hey, the room was entered by something. The thing that was entered was two dogs.” That’s kind of awkward and weird and stuff, but those are all similar messages with different forms, different ways I might frame them, and of course I used the same words there all the time.
(01:14:49) I could have referred to the dogs as a Dalmatian and a poodle or something. I could have been more specific or less specific about what they are, and I could have been more abstract about the number. So I am trying to keep the meaning, which is this event, constant, and then, how am I going to describe that to get it to you? It kind of depends on what you need to know and what I think you need to know. But let’s control for all that stuff, and I’m just choosing. I’m doing something simpler than you’re doing, which is just forms, just words.
Edward Gibson (01:15:22) To you, specifying the breed of dog and whether they’re cute or not is changing the meaning.
Lex Fridman (01:15:30) That might be, yeah. Well, that would be changing the meaning for sure.
Edward Gibson (01:15:33) Right. So you’re just-
Lex Fridman (01:15:36) That’s changing the meaning. But say even if we keep that constant, we can still talk about what’s easier or harder for me or the listener: which phrase structures I use, which combinations.
Edward Gibson (01:15:49) This is so fascinating and just a really powerful window into human language, but I still wonder, throughout this, how vast the gap is between meaning and form. I just have this maybe romanticized notion that they’re close together, that they evolved close together, hand in hand. That you can’t just simply optimize for one without the other being in the room with us. It’s kind of like an iceberg: form is the tip of the iceberg, and the meaning is the rest of the iceberg, but you can’t separate them.
Lex Fridman (01:16:26) But I think that’s why these large language models are so successful: because they’re good at form, and form isn’t that hard in some sense. And meaning is tough still, and that’s why they don’t understand. We’re going to talk about that later maybe, but we can distinguish, forget about large language models, talking about humans, maybe we’ll talk about that later too, the difference between language, which is a communication system, and thinking, which is meaning. So language is a communication system for the meaning. It’s not the meaning. And there’s a lot of interesting evidence we can talk about relevant to that.

Thinking and language

Edward Gibson (01:17:04) Well, that’s a really interesting question. What is the difference between language, written or communicated, versus thought? What to you is the difference between them?
Lex Fridman (01:17:18) Well, you or anyone has to think of a task which they think is a good thinking task, and there are lots and lots of tasks which would be good thinking tasks. Let’s say it’s playing chess, that’s a good thinking task, or playing some game, or doing some complex puzzles, maybe remembering some digits; that’s thinking. Maybe just listening to music is thinking. There are a lot of different tasks we might think of as thinking.
(01:17:47) There’s this woman in my department, Fedorenko, and she’s done a lot of work on this question about what’s the connection between language and thought. And so she uses, I was referring earlier to fMRI, that’s her primary method. And she has been really fascinated by this question about what language is. As I mentioned earlier, you can localize my language area or your language area in a few minutes, like 15 minutes. I can listen to language, listen to non-language or backward speech or something, and we’ll find this left-lateralized network of areas in my head which is especially sensitive to language, as opposed to whatever that control was.
Edward Gibson (01:18:28) Can you specify what you mean by language? Like communicating language? What is language?
Lex Fridman (01:18:31) Just sentences. I’m listening to English of any kind, a story, or I can read sentences. Anything at all that I understand, if I understand it, then it’ll activate my language network.
Edward Gibson (01:18:42) [inaudible 01:18:42]
Lex Fridman (01:18:42) My language network is going like crazy when I’m talking and when I’m listening to you because we’re communicating.
Edward Gibson (01:18:48) And that’s pretty stable.
Lex Fridman (01:18:49) Yeah, it’s incredibly stable. So I happen to be married to this woman, Fedorenko, and so I’ve been scanned by her over and over and over since 2007 or ’06 or something, and my language network is exactly the same a month ago as it was back in 2007.
Edward Gibson (01:18:49) Oh, wow.
Lex Fridman (01:19:05) It’s amazingly stable. It’s astounding. It’s a really fundamentally cool thing. And so my language network is like my face. Okay? It’s not changing much over time inside my head.
Edward Gibson (01:19:17) Can I ask a quick question? Sorry, it’s a small tangent. At which point as you grow up from baby to adult does it stabilize?
Lex Fridman (01:19:25) We don’t know. That’s a very hard question. They’re working on that right now because of the problem of scanning little kids. Trying to do the localization on little children in this scanner, where you’re lying in the fMRI scanner, that’s the best way to figure out where something’s going on inside our brains, and the scanner is loud and you’re in this tiny little area. It’s claustrophobic. It doesn’t bother me at all, I can go to sleep in there, but some people are bothered by it, and little kids don’t really like it, and they don’t like to lie still. And you have to be really still, because if you move around, that messes up the coordinates of where everything is. And so your question is, how and when does language develop? How does this left-lateralized system come to be? It’s really hard to get a two-year-old to do this task, but they’re maybe starting to get three- and four- and five-year-olds to do it for short periods, and it looks like it’s there pretty early.
Edward Gibson (01:20:19) So clearly, when you lead up to a baby’s first words, before that there’s a lot of fascinating turmoil going on about figuring out what are these people saying and you’re trying to make sense, how does that connect to the world and all that kind of stuff. That might be just fascinating development that’s happening there. That’s hard to introspect. But anyway-
Lex Fridman (01:20:42) We’re back to the scanner, and I can find my network in 15 minutes, and now we can ask, “Find my network, find yours, find 20 other people’s,” and we can do some other tasks, anything else you think is thinking, some other kind of task. I can do a spatial memory task. I can do a music perception task. I can do a programming task, if I program, where I can understand computer programs. And none of those tasks tap the language network at all. At all. There’s no overlap. They’re highly activated in other parts of the brain. There’s a bilateral network, which I think she tends to call the multiple-demands network, which does anything that’s kind of hard. Anything that’s difficult in some way will activate that multiple-demands network. Music will be in some music area, there are music-specific kinds of areas, but none of them activate the language area at all, unless there are words. So if you have music and there’s a song and you can hear the words, then you get the language area.
Edward Gibson (01:21:46) Are we talking about speaking and listening or are we also talking about reading?
Lex Fridman (01:21:50) This is all comprehension of any kind-
Edward Gibson (01:21:53) That is fascinating.
Lex Fridman (01:21:54) … so for this network, it doesn’t make any difference if it’s written or spoken. So the thing that Fedorenko calls the language network is this high-level language network: it’s not about spoken language, and it’s not about written language, it’s about either one of them. And so when you do speech, you listen to speech and you subtract away some language you don’t understand, or you subtract away backwards speech, which sounds like speech but isn’t. Then you take away the sound part altogether, and then if you do written, you get exactly the same network for just reading the language versus reading nonsense words or something like that.
(01:22:34) You’ll find exactly the same network. And so this is about high-level comprehension of language, in this case. Production’s a little harder to run in the scanner, but the same thing happens in production: you get the same network. Production’s a little harder, because you have to figure out how to run a task in the scanner such that you’re doing some kind of production. And I can’t remember, they’ve done a bunch of different kinds of tasks there, where you get people to produce things, figure out how to produce, and the same network goes on there, exactly the same place.
Edward Gibson (01:23:02) Wait, wait, so if you read random words-
Lex Fridman (01:23:05) If you read things like-
Edward Gibson (01:23:07) Gibberish.
Lex Fridman (01:23:08) … Lewis Carroll’s, “‘Twas brillig,” Jabberwocky, right? They call that Jabberwocky speech-
Edward Gibson (01:23:13) The network doesn’t get activated.
Lex Fridman (01:23:15) Not as much. There are words in there. There’s function words and stuff, so it’s lower activation.
Edward Gibson (01:23:21) That’s fascinating.
Lex Fridman (01:23:22) So basically, the more language-like it is, the higher it goes in the language network. And that network is there from as soon as you learn language. If you speak multiple languages, the same network is going for your multiple languages. So if you speak English and you speak Russian, both of them are hitting that same network, if you’re fluent in those languages.
Edward Gibson (01:23:44) Programming-
Lex Fridman (01:23:45) Not at all. Isn’t that amazing? Even if you’re a really good programmer, that is not a human language. It is just not conveying the same information, and so it is not in the language network.
Edward Gibson (01:23:56) Is that as mind-blowing as I think? That’s weird.
Lex Fridman (01:23:59) It’s pretty cool. It is amazing.
Edward Gibson (01:23:59) That’s pretty weird.
Lex Fridman (01:24:00) So that’s one set of data. Hers shows that what you might think of as thinking is not language. Language is just this conventionalized system that we’ve worked out in human languages. Oh, another fascinating little tidbit is that even with these constructed languages, like Klingon, or, I don’t know, the languages from Game of Thrones, I’m sorry, I don’t remember those languages. Maybe you-
Edward Gibson (01:24:25) There’s a lot of people offended right now.
Lex Fridman (01:24:26) … there’s people that speak those languages. They really speak those languages because the people that wrote the languages for the shows, they did an amazing job of constructing something like a human language and that lights up the language area because they can speak pretty much arbitrary thoughts in a human language. It’s a constructed human language, and probably it’s related to human languages because the people that were constructing them were making them like human languages in various ways, but it also activates the same network, which is pretty cool. Anyway.
Edward Gibson (01:24:59) Sorry to go into a place where you may be a little bit philosophical, but is it possible that this area of the brain is doing some kind of translation into a deeper set of almost like concepts?
Lex Fridman (01:25:13) That’s what it has to be doing. It’s doing communication. It is translating from thought, whatever that is, something more abstract, and that’s what it’s doing. That is kind of what it is doing. It’s a meaning network, I guess.
Edward Gibson (01:25:27) Yeah, like a translation network. But I wonder what is at the core at the bottom of it? What are thoughts? Thoughts and words, are they neighbors or is it one turtle sitting on top of the other, meaning is there a deep set of concepts that we-
Lex Fridman (01:25:46) Well, there’s connections between what these things mean and then there’s probably other parts of the brain, but what these things mean. And so when I’m talking about whatever it is I want to talk about, it’ll be represented somewhere else. That knowledge of whatever that is will be represented somewhere else.
Edward Gibson (01:26:02) Well, I wonder if there’s some stable-
Lex Fridman (01:26:04) That’s meaning.
Edward Gibson (01:26:05) … nicely compressed encoding of meanings-
Lex Fridman (01:26:08) I don’t know.
Edward Gibson (01:26:08) … that’s separate from language. I guess the implication here is that we don’t think in language.
Lex Fridman (01:26:19) That’s correct. Isn’t that cool? And that’s so interesting. This is hard to do experiments on, but there is this idea of inner voice, and a lot of people have an inner voice. If you do a poll on the internet and ask if you hear yourself talking when you’re just thinking or whatever, about 70 or 80% of people will say yes. Most people have an inner voice. I don’t, and so I always find this strange. When people talk about an inner voice, I always thought this was a metaphor. I know most of you, whoever’s listening to this, think I’m crazy now, because I don’t have an inner voice and I just don’t know what you’re listening to. It sounds kind of annoying to me, to have this voice going on while you’re thinking, but I guess most people have that, and I don’t, and we don’t really know what that connects to.
Edward Gibson (01:27:08) I wonder if the inner voice activates that same network. I wonder.
Lex Fridman (01:27:12) I don’t know. This could be speechy, right? So that’s like you hear. Do you have an inner voice?
Edward Gibson (01:27:18) I don’t think so.
Lex Fridman (01:27:18) Oh. A lot of people have this sense that they hear themselves, and then, say they read someone’s email; I’ve heard people tell me that they hear that other person’s voice when they read other people’s emails, and I’m like, “Wow, that sounds so disruptive.”
Edward Gibson (01:27:33) I do think I vocalize what I’m reading, but I don’t think I hear a voice.
Lex Fridman (01:27:38) Well, you probably don’t have an inner voice.
Edward Gibson (01:27:40) I don’t think I have an inner voice.
Lex Fridman (01:27:41) People have an inner voice. People have this strong percept of hearing sound in their heads when they’re just thinking.
Edward Gibson (01:27:48) I refuse to believe that’s the majority of people.
Lex Fridman (01:27:50) Majority, absolutely.
Edward Gibson (01:27:51) What?
Lex Fridman (01:27:52) It’s like two-thirds or three-quarters. It’s a lot. Whenever I ask in class, and when I’ve asked on the internet, they always say that. So you’re in a minority.
Edward Gibson (01:27:59) It could be a self-report flaw.
Lex Fridman (01:28:01) It could be.
Edward Gibson (01:28:02) When I’m reading inside my head, I’m kind of saying the words, which is probably the wrong way to read, but I don’t hear a voice. There’s no percept of a voice. I refuse to believe the majority of people have it. Anyway, the human brain is fascinating, but it still blew my mind that language, that comprehension, does appear to be separate from thinking.
Lex Fridman (01:28:32) So that’s one set of data from Fedorenko’s group: no matter what task you do, if it doesn’t have words and combinations of words in it, then it won’t light up the language network. It’ll be active somewhere else, but not there. So that’s one. And then this other piece of evidence relevant to that question: it turns out there’s this group of people who’ve had a massive stroke on the left side that wiped out their language network. As long as it didn’t wipe out everything on the right as well, in which case they wouldn’t be cognitively functional, but if it just wiped out language, which is pretty tough to do because it’s very expansive on the left, then there are patients like this, so-called global aphasics, who can do any task just fine, but not language.
(01:29:23) You can’t talk to them, they don’t understand you. They can’t speak, they can’t write, they can’t read. But they can play chess, they can drive their cars, they can do all kinds of other stuff, do math. So math is not in the language area, for instance. You do arithmetic and stuff, that’s not in the language area. It’s got symbols, so people confuse some kind of symbolic processing with language, and symbolic processing is not the same. There are symbols and they have meaning, but it’s not language. It’s not a conventionalized language system, and so math isn’t there. And so they can do math. They do just as well as their age-matched controls on all these tasks. This is Rosemary Varley over at University College London, who has a bunch of patients for whom she’s shown this. So that combination suggests that language isn’t necessary for thinking. It doesn’t mean you can’t think in language. You could think in language, because language allows a lot of expression, but you don’t need it for thinking. It suggests that language is a separate system from-
Edward Gibson (01:30:24) This is kind of blowing my mind right now.
Lex Fridman (01:30:24) It’s cool, isn’t it?
Edward Gibson (01:30:26) I’m trying to load that in because it has implications for large language models.
Lex Fridman (01:30:32) It sure does, and they’ve been working on that.

LLMs

Edward Gibson (01:30:35) Well, let’s take a stroll there. You wrote that the best current theories of human language are arguably large language models, so this has to do with form.
Lex Fridman (01:30:43) It’s kind of a big theory, but the reason it’s arguably the best is that it does the best at predicting what’s English, for instance. It’s incredibly good, better than any other theory, but there’s not enough detail.
Edward Gibson (01:31:01) Well, it’s opaque. You don’t know what’s going on.
Lex Fridman (01:31:03) You don’t know what’s going on.
Edward Gibson (01:31:05) Black box.
Lex Fridman (01:31:06) It’s in a black box. But I think it is a theory.
Edward Gibson (01:31:08) What’s your definition of a theory? Because it’s a gigantic black box with a very large number of parameters controlling it. To me, theory usually requires a simplicity, right?
Lex Fridman (01:31:20) Well, I don’t know, maybe I’m just being loose there. I think it’s not a great theory, but it’s a theory. It’s a good theory in one sense in that it covers all the data. Anything you want to say in English, it does. And so that’s how it’s arguably the best, is that no other theory is as good as a large language model in predicting exactly what’s good and what’s bad in English. Now, you’re saying is it a good theory? Well, probably not because I want a smaller theory than that. It’s too big, I agree.
Edward Gibson (01:31:47) You could probably construct a mechanism by which it can generate a simple explanation of a particular language, like a set of rules. Something like, it could generate a dependency grammar for a language, right?
Lex Fridman (01:32:03) Yes.
Edward Gibson (01:32:03) You could probably just ask it about itself.
Lex Fridman (01:32:12) Well, that presumes, and there’s some evidence for this, that some large language models are implementing something like dependency grammar inside them. And so there’s work from a guy called Chris Manning and colleagues over at Stanford in natural language. And they looked at, I don’t know how many large language model types, but certainly BERT and some others, where you do some kind of fancy math to figure out exactly what kinds of abstract representations are going on, and they were saying it does look like dependency structure is what they’re constructing. It’s actually a very, very good map, so they are constructing something like that. Does it mean that they’re using that for meaning? Probably, but we don’t know.
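In the spirit of that Stanford work (a sketch of the idea only, not their actual code), the “fancy math” amounts to learning a linear map whose squared distances between word vectors track distances in the dependency tree:

```python
import numpy as np

# Structural-probe-style distance: project hidden states with a learned
# matrix B and read off squared L2 distances. Here B and the hidden
# states are random placeholders; a real probe is trained so that
# probe_distance(i, j) matches the number of tree edges between words.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(5, 768))  # one vector per word (e.g., from BERT)
B = rng.normal(size=(64, 768))      # the probe's learned projection

def probe_distance(i, j):
    diff = B @ (hidden[i] - hidden[j])
    return float(diff @ diff)

print(probe_distance(0, 1))
```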
Edward Gibson (01:33:01) You write that the kinds of theories of language that LLMs are closest to are called construction-based theories. Can you explain what construction-based theories are?
Lex Fridman (01:33:09) It’s just a general theory of language such that there’s a form and a meaning pair for lots of pieces of the language. And so construction grammar is primarily usage-based. It’s trying to deal with the things that people actually say and actually write, so it’s a usage-based idea. What’s a construction? A construction is either a simple word, so a morpheme plus its meaning, or a combination of words. It’s basically combinations of words, the rules, but it’s unspecified as to what the underlying form of the grammar is. And so I would argue that dependency grammar is maybe the right form to use for those types of construction grammar. Construction grammar typically isn’t quite formalized, and so maybe a formalization of it might be in dependency grammar. I would think so, but it’s up to other researchers in that area whether they agree or not.
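One toy way to render the form-and-meaning-pair idea, purely as an illustration (the representation is hypothetical, not a formalization anyone has committed to):

```python
from dataclasses import dataclass

@dataclass
class Construction:
    form: str     # a single morpheme or a multi-word pattern with slots
    meaning: str  # an informal gloss of what that form conveys

lexicon = [
    Construction("dog", "canine"),
    Construction("X entered Y", "X moved into Y"),
    Construction("the Xer the Yer", "Y increases with X"),  # "the more the merrier"
]
```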
Edward Gibson (01:34:16) Do you think that large language models understand language? Are they mimicking language? I guess the deeper question there is, are they just understanding the surface form or do they understand something deeper about the meaning that then generates the form?
Lex Fridman (01:34:33) I would argue they’re doing the form. They’re doing the form, they’re doing it really, really well. And are they doing the meaning? No, probably not. There are lots of examples from various groups showing that they can be tricked in all kinds of ways. They really don’t understand the meaning of what’s going on. And there are a lot of examples that these groups have given which show they don’t really understand what’s going on. So the Monty Hall problem is this silly problem. Let’s Make a Deal is this old game show, and there are three doors and there’s a prize behind one, and there are some junk prizes behind the other two, and you’re trying to select one. And Monty, he knows where the target item is. He knows everything that’s back there, and he gives you a choice.
(01:35:25) You choose one of the three, and then he opens one of the doors, and it’s some junk prize. And then the question is, should you trade to get the other one? And the answer is, yes, you should trade, because he knew which doors he could open, and so now the odds are two-thirds. And then you change that a little bit for the large language model. The large language model has seen that explanation so many times. You change the story a little bit, so you make it sound like it’s the Monty Hall problem, but it’s not. You just say, “Oh, there are three doors, and behind one of them is a good prize, and there are two bad doors. I happen to know the good prize, the car, is behind door number one, so I’m going to choose door number one.”
(01:36:03) Monty Hall opens door number three and shows me there’s nothing there. Should I trade for door number two, even though I know the good prize is behind door number one? And the large language model says, “Yes, you should trade,” because it just goes through the form it’s seen before so many times in these cases, where yes, you should trade because your odds have shifted from one in three to two out of three. It doesn’t have any way to remember that actually, you have 100% probability on that door number one. You know that. That’s not part of the setup it’s seen hundreds and hundreds of times before. And so even if you try to explain to it that it’s wrong, it’ll just keep giving you back the same thing.
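For the standard version of the problem, a quick simulation confirms the two-thirds arithmetic described above; the variant in the anecdote, where you already know where the car is, is the degenerate case noted in the final comment.

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host, who knows where the prize is, opens a junk door.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
print(sum(play(True, rng) for _ in range(n)) / n)   # ~0.667: switching wins
print(sum(play(False, rng) for _ in range(n)) / n)  # ~0.333: staying wins
# In the variant above, the prize location is known with certainty,
# so the probabilities are fixed and switching simply loses.
```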
Edward Gibson (01:36:45) But it’s also possible a large language model would be aware of the fact that there’s sometimes over-representation of a particular kind of formulation, and that it’s easy to get tricked by that. And so you could see, as the models get larger and larger, them being a little bit more skeptical when they see over-representation. It just feels like training on form can go really far in terms of being able to generate things that look like the thing understands deeply the underlying world model, the kind of mathematical world, physical world, psychological world that would generate these kinds of sentences. It just feels like you’re creeping close to the meaning part; easily fooled, all this kind of stuff, but that’s humans too. So it just seems really impressive how often it seems like it understands concepts.
Lex Fridman (01:37:54) You don’t have to convince me of that. I am very, very impressed. You’re giving a possible world where maybe someone’s going to train some other versions such that it’ll be somehow abstracting away from types of forms, I don’t think that’s happened. And so-
Edward Gibson (01:38:12) Well, no, no, no, I’m not saying that. I think when you just look at anecdotal examples and just showing a large number of them where it doesn’t seem to understand and it’s easily fooled, that does not seem like a scientific data-driven analysis of how many places is it damn impressive in terms of meaning and understanding and how many places is easily fooled?
Lex Fridman (01:38:36) That’s not the inference I want to make. The inference I’m trying to push is just: is it like humans here? It’s probably not like humans here, it’s different. Humans don’t make that error. If you explain that to them, they’re not going to make that error. So it’s doing something different from humans in that case.
Edward Gibson (01:38:59) What’s the mechanism by which humans figure out that it’s an error?
Lex Fridman (01:39:02) I’m just saying the error there is, if I explain to you there’s a hundred percent chance that the car is behind this door, do you want to trade? People say no, but this thing will say yes, because it’s that trick; it’s so wound up on the form. That’s an error that a human doesn’t make, which is kind of interesting.
Edward Gibson (01:39:23) Less likely to make, I should say.
Lex Fridman (01:39:25) Less likely.
Edward Gibson (01:39:26) Because you’re asking the system to understand 100%, you’re asking it about mathematical concepts.
Lex Fridman (01:39:40) But in the places where language models are good, the form is amazing. So let’s go back to nested structure, center-embedded structures. If you ask a human to complete those, they can’t do it. Neither can a large language model. They’re just like humans in that. If I ask a large language model-
Edward Gibson (01:39:56) That’s fascinating, by the way. Center embedding trips up anyone-
Lex Fridman (01:40:01) Just like humans. Exactly the same way as humans, and that’s not trained. So that is a similarity, but that’s not meaning. This is form. But when we get into meaning, this is where they get kind of messed up. When you start just saying, “Oh, what’s behind this door? Oh, this is the thing I want,” humans don’t mess that up as much. Here, the form is just like humans’. The form match is amazingly similar, without being trained to do that. It’s trained in the sense that it’s getting lots of data, which is just like human data, but it’s not being trained on bad sentences and being told what’s bad. It just can’t do those. It’ll actually say things like, “Those are too hard for me to complete,” or something, which is kind of interesting, actually. How does it know that? I don’t know.
Edward Gibson (01:40:51) But it doesn’t just complete; it very often says stuff that’s true, and sometimes says stuff that’s not true. And almost always the form is great. But it’s still very surprising that, with really great form, it’s able to generate a lot of things that are true, based on what it’s trained on and so on. So it’s not just form that it’s generating; it’s mimicking true statements-
Lex Fridman (01:41:24) That’s right, that’s right. I think that’s right.
Edward Gibson (01:41:25) … from the internet. I guess the underlying idea there is that on the internet, truth is overrepresented versus falsehoods.
Lex Fridman (01:41:33) I think that’s probably right.
Edward Gibson (01:41:35) But the fundamental thing it’s trained on you’re saying is just form, and it’s really-
Lex Fridman (01:41:40) I think so.
Edward Gibson (01:41:42) Well, to me, that’s still a little bit of an open question. I probably lean agreeing with you, especially now you’ve just blown my mind that there’s a separate module in the brain for language versus thinking. Maybe there’s a fundamental part missing from the large language model approach that lacks the thinking, the reasoning capability.
Lex Fridman (01:42:08) Yeah, that’s what this group argues. So the same group, Fedorenko’s group has a recent paper arguing exactly that. There’s a guy called Kyle Mahowald who’s here in Austin, Texas, actually. He’s an old student of mine, but he’s a faculty in linguistics at Texas, and he was the first author on that.
Edward Gibson (01:42:27) That’s fascinating. Still to me, an open question. What to you are the interesting limits of LLMs?
Lex Fridman (01:42:35) I don’t see any limits to their form. Their form is perfect.
Edward Gibson (01:42:35) Impressive, perfect.
Lex Fridman (01:42:35) It’s pretty close to being-
Edward Gibson (01:42:39) Well, you said ability to complete central embeddings.
Lex Fridman (01:42:39) Yeah. It’s just the same as humans. It seems the same as humans.
Edward Gibson (01:42:47) But that’s not perfect, right? It should be able to-
Lex Fridman (01:42:51) That’s good. No, but I want it to be like humans. I want a model of humans.
Edward Gibson (01:42:55) Oh, wait, wait. Oh, so perfect to you is as close to humans as possible.
Lex Fridman (01:42:59) Yeah.
Edward Gibson (01:42:59) I got it. But if you’re not human, you’re superhuman, you should be able to complete central embedded sentences, right?
Lex Fridman (01:43:07) If the mechanism is modeling humans, I think it’s kind of really interesting that it can’t.
Edward Gibson (01:43:13) That is really interesting.
Lex Fridman (01:43:14) I think it’s potentially underlying modeling something like the way the form is processed.
Edward Gibson (01:43:21) The form of human language and how humans process the language.
Lex Fridman (01:43:26) Yes. I think that’s plausible.

Center embedding

Edward Gibson (01:43:27) And how they generate language, process language and generate language. That’s fascinating. So in that sense, they’re perfect. If we can just linger on the center embedding thing: that’s hard for LLMs to produce, and that seems really impressive, because it’s hard for humans to produce. And how does that connect to the thing we’ve been talking about before, which is the dependency grammar framework in which you view language, and the finding that short dependencies seem to be a universal part of language? So why is it hard to complete center embeddings?
Lex Fridman (01:44:02) So what I like about dependency grammar is it makes the cognitive cost associated with longer distance connections very transparent. Basically, it turns out there is a cost associated with producing and comprehending connections between words, which are just not beside each other. The further apart they are, the worse it is. We can measure that and there is a cost associated with that.
Edward Gibson (01:44:31) Can you just linger on what do you mean by cognitive cost and how do you measure it?
Lex Fridman (01:44:36) Sure. Well, you can measure it in a lot of ways. The simplest is just asking people to say how good a sentence sounds. Just ask. That’s one way to measure, and you try to triangulate then across sentences and across structures to try to figure out what the source of that is. You can look at reading times in controlled materials, in certain kinds of materials, and then we can measure the dependency distances there. There’s a recent study-
Lex Fridman (01:45:00) … There’s a recent study which looked at, we’re talking about the brain here, the language network. We could look at the activation in the language network and how big the activation is, depending on the length of the dependencies. It turns out, in just random sentences that you’re listening to, there were people listening to stories here, the longer the dependency is, the stronger the activation in the language network. So there’s a bunch of different measures we could do, and that’s kind of a neat measure, actually, of actual-
Edward Gibson (01:45:40) Activation.
Lex Fridman (01:45:41) … activation in the brain.
Edward Gibson (01:45:42) So then you can somehow in different ways convert it to a number. I wonder if there’s a beautiful equation connecting cognitive costs and length of dependency. E equals MC squared kind of thing.
Lex Fridman (01:45:51) It’s complicated, but probably it’s doable. I would guess it’s doable. I tried to do that a while ago, and I was reasonably successful, but for some reason I stopped working on that. I agree with you that it would be nice to figure out… So, there’s some way to figure out the cost. It’s complicated.
(01:46:08) Another issue you raised before was, how do you measure distance? Is it words? It probably isn’t, and that’s part of the problem: some words matter more than others. Meaning nouns might matter more, and then it maybe depends on which kind of noun. Is it a noun we’ve already introduced, or a noun that’s already been mentioned? Is it a pronoun versus a name? All these things probably matter. So probably the simplest thing to do is just, oh, let’s forget about all that and just count words or morphemes.
Edward Gibson (01:46:38) For sure. But there might be some insight in the kind of function that fits the data. Meaning like quadratic… What-
Lex Fridman (01:46:50) I think it’s an exponential.
Edward Gibson (01:46:50) Exponential.
Lex Fridman (01:46:51) So, we think it’s probably an exponential, such that the longer the distance, the less it matters. So then it’s the sum of those; that was our best guess a while ago. You’ve got a bunch of dependencies, and if you’ve got a bunch of them that are being connected at some point, at the ends of those, the cost is some exponential function of those, is my guess. The reason it’s probably an exponential is that it’s not just the distance between two words. I can make a very, very long subject-verb dependency by adding lots and lots of noun phrases and prepositional phrases, and it doesn’t matter too much. It’s when you do nesting, when I have multiple of these, that things go really bad, go south.
Edward Gibson (01:47:34) That’s probably somehow connected to working memory or something like this?
Lex Fridman (01:47:37) Yeah, it’s probably a function of memory here: the access, trying to find those earlier things. It’s kind of hard to figure out what was referred to earlier, those connections. That’s a retrieval notion, as opposed to a storage notion: trying to retrieve those earlier words, depending on what was in between. Then we’re talking about interference from similar things in between. The right theory probably has that kind of notion, an interference-from-similarity notion.
(01:48:06) So I’m dealing with an abstraction over the right theory, which is just: let’s count words. It’s not right, but it’s close. And maybe you’re right, though; there’s some sort of an exponential or something to figure out the total, so we can figure out a function for any given sentence in any given language. But it’s funny, people haven’t done that too much, which I do think is… I’m interested that you find that interesting. I really find that interesting, and a lot of people haven’t found it interesting. I don’t know why I haven’t gotten people to want to work on that. I really like that too.
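Purely as an illustration of the summed, saturating cost being gestured at here, a sketch might look like this; the functional form and the constant tau are guesses for exposition, not a fitted model.

```python
import math

# Each dependency contributes a cost that grows with its length but
# saturates (the "exponential" intuition: each additional intervening
# word matters a bit less); a sentence's cost is the sum over its
# dependencies.
def dependency_cost(length, tau=3.0):
    return 1.0 - math.exp(-length / tau)

def sentence_cost(dependency_lengths):
    return sum(dependency_cost(d) for d in dependency_lengths)

print(sentence_cost([1, 1]))              # "The boy cried": cheap
print(sentence_cost([1, 5, 3, 1, 1, 4]))  # the center-embedded example: costly
```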
Edward Gibson (01:48:36) That’s a beautiful idea, and the underlying idea is beautiful: that there’s a cognitive cost that correlates with the length of dependency. Language is so fundamental to the human experience, and this is a nice, clean theory of language, where it’s like, “Wow, okay, so we like our words that depend on each other to be close together.”
Lex Fridman (01:49:00) Yeah, that’s why I like it too. It’s so simple. It’s so simple.
Edward Gibson (01:49:02) Yeah, the simplicity of the theory is good.
Lex Fridman (01:49:04) And yet it explains some very complicated phenomena. If I write these very complicated sentences, it's kind of hard to know why they're so hard. And you can nail it down: I can give you a math formula for why each one of them is bad and where, and that's kind of cool. I think that's very neat.
Edward Gibson (01:49:20) Have you gone through the process… If you take a piece of text and then simplify it, there's an average dependency length, and then you reduce it and measure comprehension on the entire text, not just a single sentence? You go from James Joyce to Hemingway or something.
Lex Fridman (01:49:42) No, no. Simple answer is no. There’s probably things you can do in that kind of direction.
Edward Gibson (01:49:47) That’s fun.
Lex Fridman (01:49:49) We’re going to talk about legalese at some point, and so maybe we’ll talk about that kind of thinking with applied to legalese.
Edward Gibson (01:49:55) Let’s talk about legalese because you mentioned that as an exception. We’re just taking a tangent upon tangent, that’s an interesting one, you give it as an exception.
Lex Fridman (01:50:02) It’s an exception.
Edward Gibson (01:50:04) That you say that most natural languages, as we’ve been talking about, have local dependencies with one exception, legalese.
Lex Fridman (01:50:12) That’s right.
Edward Gibson (01:50:13) So, what is legalese first of all?
Lex Fridman (01:50:15) Oh, well, legalese is what you think it is. It’s just any legal language.
Edward Gibson (01:50:20) Well, I actually know very little about the kind of language that lawyers use.
Lex Fridman (01:50:24) So, I’m just thinking about language in laws and language and contracts.
Edward Gibson (01:50:28) Got it.
Lex Fridman (01:50:29) The stuff that we have to run into every day or every other day and skip over, because it reads poorly, or partly because it's just long, right? There's a lot of text there that we don't really want to know about. But the thing I'm interested in: I've been working with this guy called Eric Martinez. He was a lawyer who was taking my class. I was teaching a psycholinguistics lab class, which I have been teaching for a long time at MIT, and he was a law student at Harvard. He took the class because he had done some linguistics as an undergrad and he was interested in the problem of why legalese sounds hard to understand. So, why is it hard to understand, and why do they write that way if it is so hard to understand? It seems apparent that it's hard to understand. The question is, why?
(01:51:19) So, we didn’t know and we did an evaluation of a bunch of contracts. Actually, we just took a bunch of random contracts. I don’t know, there’s contracts and laws might not be exactly the same, but contracts are the things that most people have to deal with most of the time. That’s the most common thing that humans have, that adults in our industrialized society have to deal with a lot. That’s what we pulled and we didn’t know what was hard about them, but it turns out that the way they’re written is very center embedded. It has nested structures in them. So, it has low frequency words as well. That’s not surprising. Lots of texts have low… It does have surprising slightly lower frequency words than other kinds of control texts, even academic texts. Legalese is even worse. It is the worst that we were being able to find-
Edward Gibson (01:52:10) Fascinating. You just reveal the game that lawyers are playing.
Lex Fridman (01:52:13) They’re not though.
Edward Gibson (01:52:13) That they’re optimizing a different… Well-
Lex Fridman (01:52:15) It’s interesting. Now you’re getting at why. So, now you’re saying they’re doing intentionally. I don’t think they’re doing intentionally, but let’s-
Edward Gibson (01:52:23) It’s an emergent phenomena. Okay.
Lex Fridman (01:52:25) We’ll get to that. We’ll get to that. But we wanted to see what first as opposed… Because it turns out that we’re not the first to observe that legalese is weird. Back to Nixon had a plain language act in 1970 and Obama had one. Boy, a lot of these presidents that said, “Oh, no, we’ve got to simplify legal language. Must simplify it.” But if you don’t know how it’s complicated, it’s not easy to simplify it. You need to know what it is you’re supposed to do before you can fix it. So, you need a psycholinguist to analyze the text and see what’s wrong with it before you can fix it. You don’t know how to fix it. How am I supposed to fix something I don’t know what’s wrong with it?
(01:53:05) So that's what we did. We took a bunch of contracts, had people read them, and coded them for a bunch of features. One of them was center embedding: that is, basically, how often a clause intervenes between a subject and a verb, for example. That's one center embedding of a clause, and it turns out they're massively center embedded. In random contracts and random laws, I think you get about 70%, something like 70% of sentences have a center-embedded clause, which is insanely high.
(01:53:43) If you go to any other text, it's down to 20% or something. It's so much higher than any control you can think of. People think, oh, technical, academic texts. No, people don't write center-embedded sentences much in technical academic texts. They do a little bit, but it's in the 20%, 30% realm as opposed to 70. So, there's that, and there's low frequency words. Then people say, oh, maybe it's the passive. People don't like the passive. For some reason, the passive voice in English has a bad rap, and I'm not really sure where that comes from. And there is a lot of passive; there's much more passive voice in legalese than there is in other texts-
Edward Gibson (01:54:23) And passive voice accounts for some of the low frequency words?
Lex Fridman (01:54:26) No, no. Those separate. Those are separate.
Edward Gibson (01:54:28) [inaudible 01:54:28] I apologize. Oh, so passive voice sucks. Low frequency words suck.
Lex Fridman (01:54:31) Well sucks is different. So, these are different-
Edward Gibson (01:54:32) That’s a judgment I’m passing?
Lex Fridman (01:54:33) Yeah, yeah, yeah. Drop the judgment. It's just that these are frequent; these are things which happen in legalese text. Then we can ask, the dependent measure is how well you understand texts with those features. And it turns out the passive makes no difference. It has zero effect on your comprehension ability, on your recall ability. Nothing at all, no effect. The words matter a little bit; low frequency words are going to hurt you in recall and understanding. But what really hurts is the center embedding.
Edward Gibson (01:55:01) Center embedding.
Lex Fridman (01:55:02) That kills you. That slows people down, that makes them very poor at understanding, and they can't recall what was said nearly as well. And we did this not only on laypeople, we did it on a lot of laypeople; we also ran a hundred lawyers. We recruited lawyers from a wide range of different levels of law firms and stuff. And they have the same pattern. When they did this, I did not know what would happen. I thought maybe, since they're used to legalese, they'd process it just as well as if it were normal text.
(01:55:37) No, no, they’re much better than laypeople. So, they can much better recall, much better at understanding. But they have the same main effects as laypeople. Exactly the same. So, they also much prefer the non-center… So, we constructed non-center embedded versions of each of these. We constructed versions which have higher frequency words in those places. And we did, we un-passivized, we turned them into active versions. The passive/active made no difference. The words made a little difference. And the un-center embedding makes big differences in all the populations.
Edward Gibson (01:56:12) Un-center embedding. How hard is that process, by the way?
Lex Fridman (01:56:15) Not very hard.
Edward Gibson (01:56:16) Sorry, dumb question, but how hard is it to detect center embedding?
Lex Fridman (01:56:19) Oh, easy. Easy to detect. [inaudible 01:56:21]-
Edward Gibson (01:56:20) You’re just looking at long dependencies or is there a real-
Lex Fridman (01:56:23) So, there’s automatic parsers for English, which are pretty good.
Edward Gibson (01:56:27) And they can detect center embedding?
Lex Fridman (01:56:28) Oh yeah, very good.
Edward Gibson (01:56:29) Or I guess nesting?
Lex Fridman (01:56:30) Perfectly. [inaudible 01:56:32]. Pretty much.
Edward Gibson (01:56:32) So, you’re not just looking for long dependencies, you’re just literally looking for center embedding.
Lex Fridman (01:56:36) Yeah, we are in these cases. But long dependencies and center embedding are highly correlated; these things go together.
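For a flavor of how a parser-based check might look, here is a rough sketch using spaCy. This is not the coding scheme from the study; the clause-label set and the heuristic itself are simplifying assumptions, and the verdict depends on the parser's analysis:

```python
# Flag sentences where a clause intervenes between a subject and its verb,
# a simple stand-in for one kind of center embedding.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

CLAUSE_DEPS = {"relcl", "acl", "advcl", "ccomp"}  # clause-like dependency labels

def has_center_embedded_clause(sentence: str) -> bool:
    doc = nlp(sentence)
    for tok in doc:
        # a subject appearing before its head verb
        if tok.dep_ in ("nsubj", "nsubjpass") and tok.i < tok.head.i:
            between = doc[tok.i + 1 : tok.head.i]  # material separating subject and verb
            if any(t.dep_ in CLAUSE_DEPS for t in between):
                return True
    return False

print(has_center_embedded_clause(
    "The reporter who the senator attacked admitted the error."))  # expected: True
print(has_center_embedded_clause(
    "The reporter admitted the error."))                           # expected: False
```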
Edward Gibson (01:56:40) All right. So, like center embedding is a big bomb you throw inside of a sentence that just blows out, that makes super-
Lex Fridman (01:56:47) Yeah. Can I read a sentence for you from these things?
Edward Gibson (01:56:49) Sure.
Lex Fridman (01:56:50) I’ll see if I can find. I mean, this is just one of the things that… This is just [inaudible 01:56:53]-
Edward Gibson (01:56:52) My eyes might glaze over in mid-sentence. No, I understand that. I mean, legalese is hard.
Lex Fridman (01:57:00) Here we go. It goes, "In the event that any payment or benefit by the company, all such payments and benefits, including the payments and benefits under section 3(A) hereof, being hereinafter referred to as the total payments, would be subject to the excise tax, then the cash severance payments shall be reduced." So that's something we pulled from a regular contract.
Edward Gibson (01:57:18) Wow.
Lex Fridman (01:57:19) And the center-embedded bit there is a definition: for some reason, they throw the definition of what payments and benefits are in between the subject and the verb. How about don't do that? How about putting the definition somewhere else, as opposed to in the middle of the sentence? That's very, very common, by the way. That's what happens: you use a word or a couple of words, then you define them, and then you continue the sentence. Just don't write like that.
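To make the suggestion concrete, one possible un-center-embedded rewrite of the contract sentence above (mine, not from the study's materials) simply moves the definition out in front: "All payments and benefits by the company, including the payments and benefits under section 3(A) hereof, are hereinafter referred to as the total payments. In the event that any total payment would be subject to the excise tax, then the cash severance payments shall be reduced." Same content, but the subject and verb of the main clause now sit next to each other.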
(01:57:47) So then we asked lawyers. We thought, "Oh, maybe lawyers like this." They don't like this. They don't want to write like this. We asked them to rate materials with the same meaning, un-center-embedded and center embedded, and they much preferred the un-center-embedded versions.
Edward Gibson (01:58:05) On comprehension, on the reading side.
Lex Fridman (01:58:07) And we asked them, “Would you hire someone who writes like this or this?” We asked them all kinds of questions and they always preferred the less complicated version, all of them. So, I don’t even think they want it this way.
Edward Gibson (01:58:18) But how did it happen?
Lex Fridman (01:58:19) How did it happen? That’s a very good question. And the answer is, I still don’t know, but-
Edward Gibson (01:58:25) I have some theories.
Lex Fridman (01:58:27) Our best theory at the moment is that there's actually some kind of performative meaning in the center embedding and the style, which tells you it's legalese. That's a reasonable guess. And maybe it's like a magic spell. We call this the magic spell hypothesis. When you ask someone to put a magic spell on someone, what do they do? People know what a magic spell is, and they do a lot of rhyming. That's what people tend to do: they'll do rhyming and some kind of poetry kind of thing.
Edward Gibson (01:59:03) Abracadabra type of thing.
Lex Fridman (01:59:05) Yeah. And maybe there's a syntactic reflex of a magic spell here, which is center embedding. It's trying to tell you, "This is something which is true," which is what the goal of law is. It's telling you something that we want you to believe is certainly true. That's what legal contracts are trying to enforce on you. And so maybe center embedding is a very abstract form which has that meaning associated with it.
Edward Gibson (01:59:36) Well, don’t you think there’s an incentive for lawyers to generate things that are hard to understand?
Lex Fridman (01:59:45) That was one of our working hypotheses. We just couldn’t find any evidence of that.
Edward Gibson (01:59:49) No, lawyers also don’t understand it, but you’re creating space-
Lex Fridman (01:59:54) But when you ask lawyers-
Edward Gibson (01:59:55) If you ask the individual members in the communist Soviet Union, their self-report is not going to correctly reflect what is broken about the gigantic bureaucracy that then leads to Chernobyl or something like this. I think the incentives under which you operate are not always transparent to the members within that system. So, it just feels like a strange coincidence that, if you zoom out and look at the system as opposed to asking individual lawyers, there is a benefit: making something hard to understand is going to make a lot of people money.
(02:00:36) You’re going to need a lawyer to figure that out, I guess from the perspective of the individual. But then that could be the performative aspect. It could be as opposed to the incentive driven to be complicated. It could be performative to where, “We lawyers speak in this sophisticated way and you regular humans don’t understand it, so you need to hire a lawyer.” Yeah, I don’t know which one it is, but it’s suspicious. Suspicious that it’s hard to understand and that everybody’s eyes glaze over and they don’t read.
Lex Fridman (02:01:04) I’m suspicious as well. I’m still suspicious and I hear what you’re saying. It could be kind, no individual and even average of individuals. It could just be a few bad apples in a way which are driving the effect in some way.
Edward Gibson (02:01:17) Influential bad apples that everybody looks up to or whatever, they’re central figures in how-
Lex Fridman (02:01:25) But it is kind of interesting that-
Edward Gibson (02:01:28) It’s fascinating.
Lex Fridman (02:01:28) … among our hundred lawyers, they did not share that.
Edward Gibson (02:01:31) They didn’t want this. That’s fascinating.
Lex Fridman (02:01:32) They really didn’t like it. And so it gave us hope-
Edward Gibson (02:01:34) And they weren’t better than regular people at comprehending it.
Lex Fridman (02:01:38) They were much-
Edward Gibson (02:01:38) Or they were on average better-
Lex Fridman (02:01:38) But they had the same difference.
Edward Gibson (02:01:40) … but the same difference.
Lex Fridman (02:01:41) Exact same difference, but they wanted it fixed. And that gave us hope, because it actually isn't very hard to construct a material which is un-center-embedded and has the same meaning. It's not very hard to do. Basically, in that situation, you're just putting definitions outside of the subject-verb relation, in that particular example, and that's pretty general. What they're doing is just throwing stuff in there which you didn't have to put in there.
(02:02:09) There’s extra words involved, typically. You may need a few extra words to refer to the things that you’re defining outside in some way. If you only use it in that one sentence, then there’s no reason to introduce extra terms. So, we might have a few more words, but it’ll be easier to understand. I have hope that now that maybe we can make legalese less convoluted in this way.
Edward Gibson (02:02:35) So, maybe the next President of the United States can, instead of saying generic things, say-
Lex Fridman (02:02:39) Say exactly-
Edward Gibson (02:02:40) “I ban center embeddings, and make Ted the language czar of-
Lex Fridman (02:02:44) Well, make Eric. Martinez is the guy you should really put in there.
Edward Gibson (02:02:53) Eric Martinez, yeah. But center embeddings are the bad thing to have.
Lex Fridman (02:02:56) That’s right.
Edward Gibson (02:02:57) If you get rid of that-
Lex Fridman (02:02:58) That’ll do a lot of it. That’ll fix a lot.
Edward Gibson (02:03:00) That’s fascinating. That is so fascinating. And just really fascinating on many fronts that humans are just not able to deal with this kind of thing and that language because of that evolved in the way you did. It’s fascinating. So, one of the mathematical formulations you have when talking about languages communication is this idea of noisy channels. What’s a noisy channel?
Lex Fridman (02:03:25) That’s about communication. And so this is going back to Shannon. Claude Shannon was a student at MIT in the ’40s. And so he wrote this very influential piece of work about communication theory or information theory, and he was interested in human language. Actually. He was interested in this problem of communication, of getting a message from my head to your head. And he was concerned or interested in what was a robust way to do that.
(02:03:59) And so, assuming we both speak the same language, we both already speak English or whatever the language is, what is a way that I can say something so that the signal I want is most likely to get to you? The problem in communication is the noisy channel: there's a lot of noise in the system. I don't speak perfectly, I make errors; that's noise. There's background noise, you know that.
Edward Gibson (02:04:30) Like literal.
Lex Fridman (02:04:31) Literal background noise. There is white noise in the background, or some other kind of noise, or some speaking going on, or you're at a party; that's background noise. You're trying to hear someone, and it's hard to understand them because there's all this other stuff going on in the background. And then there's noise on the receiver side, so that you have some problem understanding me for reasons that are just internal to you in some way. You've got some other problems, whatever, with understanding, for whatever reason. Maybe you've had too much to drink. Who knows why you're not able to pay attention to the signal.
(02:05:04) So, that’s the noisy channel. And so that language, if it’s communication system, we are trying to optimize in some sense that the passing of the message from one side to the other. And so one idea is that maybe aspects of word order, for example, might’ve optimized in some way to make language a little more easy to be passed from speaker to listener. And so Shannon’s, the guy that did this stuff way back in the forties, it was very interesting.
(02:05:34) He was interested in working in linguistics. He was at MIT, and this was his master's thesis, of all things. It's crazy how much he did for his master's thesis, in 1948 I think, or '49, something like that. And he wanted to keep working on language, but communication as a foundation for what language is just wasn't popular at the time. Chomsky was moving in there, and Shannon just wasn't able to get a handle there, I think. And so he moved to [inaudible 02:06:04] and worked on communication from a mathematical point of view, and did all kinds of amazing work. And so he's just-
Edward Gibson (02:06:12) More on the signal side versus the language side.
Lex Fridman (02:06:16) Yeah.
Edward Gibson (02:06:16) It would’ve been interesting to see if he pursued the language side. That’s really interesting.
Lex Fridman (02:06:20) He was interested in that. His examples in the '40s are very language-like things. We can kind of show that there's a noisy channel process going on: when you're listening to me, you can often guess what you think I meant, given what I said. And with respect to why language looks the way it does, there might be, as I alluded to, ways in which word order is somewhat optimized because of the noisy channel in some way.
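That guessing process has a standard way of being written down in the noisy-channel tradition (the notation here is a sketch, not a formula quoted from the conversation):

$$\hat{s} \;=\; \arg\max_{s}\; P(s)\, P(u \mid s),$$

where $u$ is the noisy utterance the listener actually perceives, $P(u \mid s)$ models the noise (what tends to get added, dropped, or swapped), and $P(s)$ is the listener's prior over what the speaker plausibly intended. When the prior is strong enough, the best guess $\hat{s}$ can differ from the literal input, which is exactly "guessing what you think I meant, given what I said."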
Edward Gibson (02:06:53) That’s really cool to model if you don’t hear certain parts of a sentence or have some probability of missing that part, how do you construct a language that’s resilient to that, that’s somewhat robust to that?
Lex Fridman (02:07:04) Yeah, that’s the idea.
Edward Gibson (02:07:06) And then you’re kind saying the word order and the syntax of language, the dependency length are all helpful to do-
Lex Fridman (02:07:14) Well, dependency length is really about memory. I think that's about what's easier or harder to produce in some way. And these other ideas are about robustness of communication, the problem of potential loss of signal due to noise. There may be aspects of word order which are somewhat optimized for that, and we have this one guess in that direction. But these are kind of just-so stories, I have to be pretty frank. I can't show that this is true. All we can do is look at the current languages of the world.
(02:07:44) We can’t see how languages change or anything because we’ve got these snapshots of a few hundred or a few thousand languages. We can’t do the right kinds of modifications to test things experimentally. And so just take this with a grain of salt from here, this stuff. The dependency stuff, I’m much more solid on and here’s what the lengths are and here’s what’s hard, here’s what’s easy, and this is a reasonable structure. I think I’m pretty reasonable. Why does the word order look the way it does is, we’re now into shaky territory, but it’s kind of cool.
Edward Gibson (02:08:17) We’re talking about, just to be clear, we’re talking about maybe just actually the sounds of communication. You and I are sitting in a bar, it’s very loud, and you model with a noisy channel, the loudness, the noise, and we have the signal that’s coming across and you’re saying word to order might have something to do with optimizing that when there’s presence of noise.
Lex Fridman (02:08:40) Yes.
Edward Gibson (02:08:40) It’s really interesting. To me, it’s interesting how much you can load into the noisy channel. How much can you bake in? You said cognitive load on the receiver end-
Lex Fridman (02:08:49) We think that there are at least three different kinds of things going on there, and we probably don't want to treat them all as the same. So I think that a better model of a noisy channel would have three different sources of noise: background noise, speaker-inherent noise, and listener-inherent noise. Those are all different things.
Edward Gibson (02:09:11) Sure. But then underneath it, there’s a million other subsets of what-
Lex Fridman (02:09:12) That’s true.
Edward Gibson (02:09:17) … on the receiving, I just mentioned cognitive load on both sides. Then there’s speech impediments or just everything, worldview. The meaning. We’ll start to creep into the meaning realm of we have different worldviews.
Lex Fridman (02:09:32) Well, how about just form still though? Just what language you know.
Edward Gibson (02:09:35) [inaudible 02:09:35].
Lex Fridman (02:09:35) So, how well you know the language: whether it's a second language for you versus a first language, and maybe what other languages you know. These are still just form stuff, and that's potentially very informative. And how old you are; these things probably matter. A child learning a language has a noisy representation of English grammar, depending on how old they are. Maybe by the time they're six it's perfectly formed, but…

Learning a new language

Edward Gibson (02:10:03) You mentioned that one of the ways to evaluate a language is learning problems. So, what's the correlation between everything we've been talking about and how easy it is to learn a language? Are short dependencies correlated with the ability to learn a language? Or the dependency grammar, is there some kind of connection there to how easy a language is to learn?
Lex Fridman (02:10:30) Well, of all the languages in the world, none that we know of right now is any better than any other with respect to optimizing dependency lengths, for example. They all do it well; they all keep dependencies low. So, I think of every human language as some kind of optimization problem, a complex optimization problem, a solution to this communication problem. And so they've solved it; they're just noisy solutions to this problem of communication. There are just so many ways you can do this.
Edward Gibson (02:11:00) So, they’re not optimized for learning. They’re probably optimized for communication.
Lex Fridman (02:11:05) And learning. So yes, one of the factors which-
Edward Gibson (02:11:06) Uh-oh.
Lex Fridman (02:11:07) So, learning is messing this up a bit. For example, if it were just about minimizing dependency lengths and that was all that mattered, then we might find grammars which didn't have regularity in their rules. But languages always have regularity in their rules. What I mean by that is that if all that mattered, when I wanted to say something to you, was keeping the dependencies as close together as possible, then I would have a very lax set of phrase structure or dependency rules. I wouldn't have very many of those; I'd have very little of that. I would just put the words that refer to connected things right beside each other. But we don't do that.
(02:11:51) There are word order rules, and depending on the language, they're more or less strict. You speak Russian; they're less strict there than in English. English has very rigid word order rules; we order things in a very particular way. So why do we do that? That's probably not about communication; that's probably about learning. It's probably easier to learn regular things, things which are very predictable. So, our guess is that's about learning, because it can't be about communication.
Edward Gibson (02:12:21) Can it be just noise? Can it be just the messiness of the development of a language?
Lex Fridman (02:12:26) If it were just about communication, then we should have languages which have very, very free word order. And we don't have that. We have freer, but not free. There's always-
Edward Gibson (02:12:35) Well, no, but what I mean by noise is cultural, sticky cultural things. The way you communicate, there's a stickiness to it; it's an imperfect, noisy, stochastic optimization. The function over which you're optimizing is very noisy. Because it feels weird to say that learning is part of the objective function, because some languages are way harder to learn than others. Or is that not true?
Lex Fridman (02:13:04) That’s not true.
Edward Gibson (02:13:06) That’s interesting.
Lex Fridman (02:13:07) I mean, yes-
Edward Gibson (02:13:07) That’s the public perception, right?
Lex Fridman (02:13:09) Right? Yes, that’s true for a second language.
Edward Gibson (02:13:12) For second language, [inaudible 02:13:13]-
Lex Fridman (02:13:12) But that depends on what you started with, right? It really depends on how close that second language is to the first language you've got. And so yes, it's very, very hard to learn Arabic if you started with English, or it's hard to learn Japanese, or… Chinese, I think, is the worst. The Defense Language Institute in the United States has a list of how hard it is to learn various languages starting from English, and I think Chinese is the worst-
Edward Gibson (02:13:40) But this is for a second language. You're saying babies don't care.
Lex Fridman (02:13:41) No. There’s no evidence that there’s anything harder or easy about any language learned, by three or four, they speak that language. So, there’s no evidence of anything hard or easy about human language. They’re all kind of equal.

Nature vs nurture

Edward Gibson (02:13:54) To what degree is language, and this is returning to Chomsky a little bit, innate? You said that Chomsky used the idea that some aspects of language are innate to explain away certain things that are observed. How much are we born with language at the core of our mind-brain?
Lex Fridman (02:14:18) The answer is, I don't know, of course. I'm an engineer at heart, I guess, and I think it's fine to postulate that a lot of it's learned, so I'm guessing that a lot of it's learned. I think the reason Chomsky went with innateness is because he hypothesized movement in his grammar. He was interested in grammar, and movement's hard to learn. I think he's right that movement is a hard thing to learn, to learn these two things together and how they interact. And there are a lot of ways in which you might generate exactly the same sentences, and it's really hard.
(02:14:52) And so he’s like, “Oh, I guess it’s not learned. It’s innate.” And if you just throw out the movement and just think about that in a different way, then you get some messiness. But the messiness is human language, which it actually fits better. That messiness isn’t a problem. It’s actually, it’s a valuable asset of the theory. And so I think I don’t really see a reason to postulate much innate structure. And that’s kind of why I think these large language models are learning so well is because I think you can learn the form, the forms of human language from the input. I think that’s likely to be true.
Edward Gibson (02:15:34) So, that part of the brain that lights up when you’re doing all the comprehension, that could be learned? That could be just, you don’t need any-
Lex Fridman (02:15:40) Yeah, it doesn’t have to be an innate, so lots of stuff is modular in the brain that’s learned. So, there’s something called the visual word form area in the back, and so it’s in the back of your head near the visual cortex. And that is very specialized brain area, which does visual word processing if you read, if you’re a reader. If you don’t read, you don’t have it. Guess what? You spend some time learning to read and you develop that brain area, which does exactly that. And so the modularization is not evidence for innateness. So, the modularization of a language area doesn’t mean we’re born with it. We could have easily learned that. We might’ve been born with it. We just don’t know at this point. We might very well have been born with this left lateralized area.
(02:16:31) There’s a lot of other interesting components here, features of this kind of argument. Some people get a stroke or something goes really wrong on the left side where the language area would be and that isn’t there. It’s not available. It develops just fine in the right. So, it’s not about the left. It goes to the left. This is a very interesting question. It’s like why are any of the brain areas the way that they are and how did they come to be that way? There’s these natural experiments which happen where people get these strange events in their brains at very young ages, which wipe out sections of their brain, and they behave totally normally and no one knows anything was wrong. And we find out later, because they happen to be accidentally scanned for some reason. It’s like what happened to your left hemisphere? It’s missing.
(02:17:21) There’s not many people who have missed their whole left hemisphere, but they’ll be missing some other section of their left or their right. And they behave absolutely normally, you would never know. So, that’s a very interesting current research. This is another project that this person, Ev Fedorenko is working on. She’s got all these people contacting her because she’s scanned some people who have been missing sections. One person missed a section of her brain and was scanned in her lab, and she happened to be a writer for the New York Times.
(02:17:50) And there was an article in the New York Times just about the scanning procedure, about what might be learned from the general process of MRI, not necessarily about language. And because she was writing for the New York Times, all these people who also have similar kinds of deficits started writing to her, because they had been accidentally scanned for some reason and found out they're missing some section. And they volunteer to be scanned.
Edward Gibson (02:18:22) These are natural experiments, you said?
Lex Fridman (02:18:22) Natural experiments. They’re kind of messy, but natural experiments, it’s kind of cool.
Edward Gibson (02:18:27) The brain.
Lex Fridman (02:18:28) She calls it Interesting Brains.
Edward Gibson (02:18:29) The first few hours, days, months of human life are fascinating. Well, inside the womb actually, that development, that machinery, whatever that is, seems to create powerful humans that are able to speak, comprehend, think, all that kind of stuff, no matter what happens… Not no matter what, but robust to the different ways that the brain might be damaged and so on. That's really interesting. But what would Chomsky say about the thing you're saying now, that language seems to be happening separate from thought? Because as far as I understand, and maybe you can correct me, he thought that language underpins thought.
Lex Fridman (02:19:13) Yeah, he thinks so. I don’t know what he’d say.
Edward Gibson (02:19:15) He would be surprised, because for him, the idea is that language is the foundation of thought.
Lex Fridman (02:19:21) That’s right. Absolutely.
Edward Gibson (02:19:23) It’s pretty mind-blowing to think that it could be completely separate from thought.
Lex Fridman (02:19:28) That’s right. So, he’s basically a philosopher, philosopher of language in a way, thinking about these things. It’s a fine thought. You can’t test it in his methods. You can’t do a thought experiment to figure that out. You need a scanner, you need brain-damaged people. You need ways to measure that. And that’s what FMRI offers. And patients are a little messier. FMRI is pretty unambiguous, I’d say. It’s very unambiguous. There’s no way to say that the language network is doing any of these tasks. There’s-
Lex Fridman (02:20:00) … that the language network is doing any of these tasks. You should look at those data. There's no chance that you can say that those networks are overlapping. They're not overlapping; they're just completely different. And you can always say, oh, it's only two people, or four people or something, for the patients, and there's something special about them we don't know. But these are just random people, and with lots of them you always find the same effects, and it's very robust, I'd say.

Culture and language

Edward Gibson (02:20:29) Well, that’s a fascinating effect. You mentioned Bolivia. What’s the connection between culture and language? You’ve also mentioned that much of our study of language comes from W-E-I-R-D, WEIRD people, western, educated, industrialized rich, and democratic. So when you study remote cultures such as around the Amazon jungle, what can you learn about language?
Lex Fridman (02:21:02) So that term WEIRD is from Joe Henrich. He's at Harvard; he's a Harvard evolutionary biologist. He works on lots of different topics, and he was basically pushing the observation that we should be careful about the inferences we want to make about humans, mostly in psychology, I guess. If we're talking about undergrads at MIT and Harvard, those aren't the same as humans in general. And so if you want to make inferences about language, for instance, there are a lot of other kinds of languages in the world than English and French and Chinese. And cultures can be very different; of course English and Chinese cultures are very different, but hunter-gatherers are much more different in some ways. So if culture has an effect on what language is, then we want to look there as well.
(02:22:06) It’s not like the industrialized cultures aren’t interesting, of course they are, but we want to look at non-industrialized cultures as well. And so I’ve worked with two, I’ve worked with the Tsimane, which are in Bolivia and Amazon, both in the Amazon in these cases. And there are so-called farmer-foragers, which is not hunter-gatherers, sort of one-up from hunter-gatherers in that they do a little bit of farming as well, a lot of hunting as well, but a little bit of farming. And the kind of farming they do is the kind of farming that I might do if I ever were to grow tomatoes or something in my backyard. So it’s not big field farming, it’s just farming for a family. A few things you do that. So that’s the kind of farming they do.
(02:22:49) And the other group I've worked with are the Piraha, who are also in the Amazon, and happen to be in Brazil. That's with a guy called Dan Everett, a linguist-anthropologist who actually lived and worked there. I mean, he was a missionary, actually, initially, back in the seventies, working on translating languages so they could teach people the Bible, teach them Christianity.
Edward Gibson (02:23:15) What can you say about that?
Lex Fridman (02:23:16) Yeah, so the two groups I've worked with, the Tsimane and the Piraha, are both language isolates, meaning there are no known related languages at all; they're just on their own. There are a lot of those, and most of the isolates occur in the Amazon or in Papua New Guinea, these places where the world has sort of stayed still for long enough. There aren't earthquakes there; well, certainly no earthquakes in the Amazon jungle. And the climate isn't bad, so you don't have droughts. In Africa, you've got a lot of movement of people because there are drought problems, so you get a lot of language contact: when you've got no water, you've got to get going, and then you run into contact with other tribes, other groups.
(02:24:13) In the Amazon, that's not the case. People can stay there for hundreds and hundreds, probably thousands of years, I guess. And so these groups, the Tsimane and the Piraha, are both isolates in that sense. They've just lived there for ages and ages with minimal contact with other outside groups. I'm interested in them because, in these cases, I'm interested in their words. I would love to study their syntax, their orders of words, but I'm mostly interested in how languages are connected to their cultures in this way. With the Piraha, the most interesting thing is that I was working on number there, number information.
(02:24:54) And so the basic idea, what I get from the words here, is that I think language is invented. We talked about color earlier; it's the same idea. What you need to talk about with someone else is what you're going to invent words for. And so we invent labels for colors, not for everything I can see, but for the things I need to tell you about, so that I can get objects from you or get you to give me the right objects. And I just don't need a word for teal or a word for aquamarine in the Amazon jungle, for the most part, because I don't have two things which differ only on those colors. And numbers are another fascinating source of information here. Naively, I certainly thought that all humans would have words for exact counting, and the Piraha don't. They don't have any words for even one; there's not a word for one in their language. And so there's certainly not a word for two, three or four. That kind of blows people's minds often.
Edward Gibson (02:25:59) Yeah, that’s blowing my mind.
Lex Fridman (02:26:00) That’s pretty weird, isn’t it?
Edward Gibson (02:26:02) How are you going to ask, "I want two of those"?
Lex Fridman (02:26:03) You just don’t. And so that’s just not a thing you can possibly ask in the Piraha, it’s not possible. There’s no words for that. So here’s how we found this out. So it was thought to be a one, two, many language. There are three words for quantifiers for sets, and people had thought that those meant one, two, and many. But what they really mean is few, some and many. Many is correct. It’s few, some and many. And so the way we figured this out, and this is kind of cool, is that we gave people, we had a set of objects. These happen to be spools of thread. It doesn’t really matter what they are, identical objects, and I sort of start off here. I just give you one of those and say, what’s that? Okay, so you’re a Piraha speaker and you tell me what it is, and then I give you two and say, what’s that?
(02:26:51) And nothing’s changing in the set except for the number. And then I just ask you to label these things. And we just do this for a bunch of different people. And frankly, I did this task.
Edward Gibson (02:27:01) This is fascinating.
Lex Fridman (02:27:02) And it’s a little bit weird. So they say the word that we thought was one, it’s few, but for the first one, and then maybe they say few, or maybe they say some for the second, and then for the third or the fourth, they start using the word many for the set. And then 5, 6, 7, 8, I go all the way to 10 and it’s always the same word. And they look at me like I’m stupid because they told me what the word was for 6, 7, 8, and going to continue asking them at nine and 10. I’m like, I’m sorry. They understand that I want to know their language. That’s the point of the task is I’m trying to learn their language, so that’s okay. But it does seem like I’m a little slow because they already told me what the word for many was, 5, 6, 7, and I keep asking.
(02:27:43) So it’s a little funny to do this task over and over. We did this with a guy called, Dan was our translator. He’s the only one who really speaks Piraha fluently. He’s a good bilingual for a bunch of languages, but also English and then a guy called Mike Frank was also a student with me down there, he and I did these things. So you do that and everyone does the same thing. We ask 10 people, and they all do exactly the same labeling for one up. And then we just do the same thing down on random order. Actually, we do some of them up, some of them down first, instead of one to 10, we do 10 down to one. I give them 10, 9, at 8, they start saying the word for some. And then when you get to four, everyone is saying the word for few, which we thought was one. So the context determined what that quantifier they used was. So it’s not a count word. They’re not count words, they’re just approximate words-
Edward Gibson (02:28:41) And they’re going to be noisy when you interview a bunch of people, the definition of few. And there’s going to be a threshold in the context.
Lex Fridman (02:28:48) Yeah, I don’t know what that means. That’s going to depend on the context. I think that’s true in English too. If you ask an English person what a few is, I mean, that’s depend on the context.
Edward Gibson (02:28:56) And it might actually be at first hard to discover because for a lot of people, the jump from one to two will be few. Right? So it’s the jump.
Lex Fridman (02:29:05) Yeah, it might still be there. Yeah.
Edward Gibson (02:29:07) I mean that’s fascinating. That’s fascinating. The numbers don’t present themselves.
Lex Fridman (02:29:11) So the words aren’t there. And so then we did these other things. Well, if they don’t have the words, can they do exact matching kinds of tasks? Can they even do those tasks? And the answer is sort of yes and no. And so yes, they can do them. So here’s the tasks that we did. We put out those spools of thread again. So I even put three out here. And then we gave them some objects, and those happened to be uninflated red balloons. It doesn’t really matter what they are, they’re a bunch of exactly the same thing. And it was easy to put down right next to these spools of thread. And so then I put out three of these, and your task was to just put one against each of my three things, and they could do that perfectly. So I mean, I would actually do that.
(02:29:55) It was a very easy task to explain to them, because I did this with Mike Frank: I'd be the experimenter telling him what to do and showing him what to do, and then we'd just say, do what he did. You copy him. I didn't have to speak Piraha except to be able to say "do what he did." And then they would do that just perfectly. And so we'd move it up; we'd do some random number of items up to 10, and they basically do perfectly on that. They never get that wrong. I mean, that's not a counting task; that's just matching. You just put one against each; it doesn't matter how many there are, and I don't need to know how many there are to do that correctly. And they would make mistakes, but very, very few, and no more than MIT undergrads, I'll just say. These are low stakes, so you make mistakes.
Edward Gibson (02:30:41) Counting is not required to complete the matching task.
Lex Fridman (02:30:45) That’s right. Not at all. And so that’s our control. And this, a guy had gone down there before and said that they couldn’t do this task, but I just don’t know what he did wrong there. They can do this task perfectly well, and I can train my dog to do this task. So of course they can do this task. And so it’s not a hard task. But the other task that was sort of more interesting is so then we do a bunch of tasks where you need some way to encode the set. So one of them is just, I just put a opaque sheet in front of the things. I put down a bunch, a set of these things, and I put an opaque sheet down. And so you can’t see them anymore. And I tell you, do the same thing you were doing before. It’s easy if it’s two or three, it’s very easy, but if I don’t have the words for eight, it’s a little harder maybe with practice. Well, no.
Edward Gibson (02:31:36) Because you have to count-
Lex Fridman (02:31:37) For us, it’s easy because we just count them. It’s just so easy to count them. But they don’t, they can’t count them because they don’t count. They don’t have words for this thing. And so they would do approximate. It’s totally fascinating. So they would get them approximately right after four or five, because basically you always get four right, three or four that looks, that’s something we can visually see. But after that, you have its approximate number. And there’s a bunch of tasks we did, and they all failed. I mean, failed. They did approximate after five on all those tasks. And it kind of shows that the words, you kind of need the words to be able to do these kinds of tasks.
Edward Gibson (02:32:17) But there’s a little bit of a chicken and egg thing there, because if you don’t have the words, then maybe they’ll limit you in the kind of a little baby Einstein there won’t be able to come up with a counting task. You know what I mean? The ability to count enables you to come up with interesting things probably. So yes, you develop counting because you need it, but then once you have counting, you can probably come up with a bunch of different inventions, how to, I don’t know what kind of thing they do matching really well for building purposes, building some kind of hut or something like this. So it’s interesting that language is a limiter on what you’re able to do.
Lex Fridman (02:33:01) Yeah, language here is the words. The words for exact counting are the limiting factor here. They just don't have them in this-
Edward Gibson (02:33:11) But that’s what I mean. That limit is also a limit on the society of what they’re able to build.
Lex Fridman (02:33:19) That’s going to be true. Yeah. I mean, we don’t know. This is one of those problems with the snapshot of just current languages is that we don’t know what causes a culture to discover/ invent accounting system. But the hypothesis is, the guess out there is something to do with farming. So if you have a bunch of goats and you want to keep track of them, and you have saved 17 goats and you go to bed at night and you get up in the morning, boy, it’s easier to have a count system to do that. That’s an abstraction over a set. So don’t have, people often ask me when I tell them about this kind of work, and they say, well, don’t these people have… Don’t they have kids? They have a lot of children. I’m like, yeah, they have a lot of children. And they do. They often have families of three or four, five kids, and they go, well, they need the numbers to keep track of their kids. And I always ask this person who says this, do you have children? And the answer is always no, because that’s not how you keep track of your kids. You care about their identities. It’s very important to me when I go, I have five children, it doesn’t matter.
Edward Gibson (02:34:20) You don’t think one, two, three, four?
Lex Fridman (02:34:21) It matters which five. If you replaced one with someone else, I would care. A goat, maybe not, right? That's the point: it's an abstraction. A goat that looks very similar to the one I had probably wouldn't matter to me.
Edward Gibson (02:34:33) But, if you care about goats, you’re going to know them actually individually also.
Lex Fridman (02:34:37) Yeah, you will.
Edward Gibson (02:34:38) I mean, cows, goats, if they're a source of food and milk and all that kind of stuff, you're going to actually care-
Lex Fridman (02:34:42) Yeah, yeah, yeah. You’re actually, you’re absolutely right. But I’m saying it is an abstraction such that you don’t have to care about their identities to do this thing fast. That’s the hypothesis, not mine from anthropologists are guessing about where words for counting came from is from farming maybe. Any way…

Universal language

Edward Gibson (02:34:57) Yeah. Do you have a sense of why universal languages like Esperanto have not taken off? Why do we have all these different languages?
Lex Fridman (02:35:08) Well, my guess is the function of a language is to do something in a community. Unless there's some function to that language in the community, it's not going to survive; it's not going to be useful. So here's a great example: language death is super common. Languages are dying all around the world, and here's why they're dying. It's not happening right now with either the Tsimane or the Piraha, but it probably will. There's a neighboring group called Moseten. I said Tsimane is an isolate; actually, it's a pair, there are two of them. There are two languages which are really close, Moseten and Tsimane, which are unrelated to anything else. And Moseten, unlike Tsimane, has a lot of contact with Spanish, and it's dying; that language is dying. The reason it's dying is that there's not a lot of value for the local people in their native language.
(02:36:06) So there’s much more value in knowing Spanish because they want to feed their families. And how do you feed your family? You learn Spanish so you can make money so you can get a job and do these things, and then you make money. And so they want Spanish things. And so Moseten is in danger and is dying, and that’s normal. Basically, the problem is that people, the reason we learn language is to communicate. We use it to make money and to do whatever it is to feed our families. If that’s not happening, then it won’t take off. It’s not like a game or something. This is something we, why is English so popular? It’s not because it’s an easy language to learn. Maybe it is, I don’t really know. But that’s not why it’s popular.
Edward Gibson (02:36:54) But because the United States is a gigantic economy, therefore-
Lex Fridman (02:36:57) Yeah, it’s big economies that do this. It’s all it is. It’s all about money. And that’s what, so there’s a motivation to learn Mandarin. There’s a motivation to learn Spanish. There’s a motivation to learn English. These languages are very valuable to know because there’s so many speakers all over the world.
Edward Gibson (02:36:58) That’s fascinating.
Lex Fridman (02:37:13) There’s less of a value economically. It’s kind of what drives this, it’s not just for fun. I mean, there are these groups that do want to learn language just for language’s sake, and there’s something to that. But those are rarities in general. Those are a few small groups that do that. Not most people don’t do that.
Edward Gibson (02:37:32) Well, if that were the primary driver, then everybody would be speaking English, or speaking one language. There's also a tension-
Lex Fridman (02:37:38) That’s happening.
Edward Gibson (02:37:40) Well, that-
Lex Fridman (02:37:41) We’re moving towards fewer and fewer languages. Exactly.
Edward Gibson (02:37:43) We are. I wonder, you're right. Maybe this is slow, but maybe that's where we're moving. But there is a tension. If you look at geopolitics and superpowers, it does seem that there's another thing in tension, which is that a language is sometimes a national identity for certain nations. That's the war in Ukraine: the Ukrainian language is a symbol of that war in many ways, a country fighting for its own identity. So it's not merely convenience. Those two things are in tension: the convenience of trade and economics, being able to communicate and trade more efficiently with neighboring countries, all that kind of stuff; but also the identity of the group.
Lex Fridman (02:38:30) That’s right. I completely agree.
Edward Gibson (02:38:32) Because language is that way for every community; dialects that emerge are a kind of identity for people, and sometimes a way for people to say F you to the more powerful people. It's interesting. So in that way, language can be used as a tool.
Lex Fridman (02:38:51) Yeah, I completely agree. And there’s a lot of work to try to create that identity. So people want to do this. As a cognitive scientist and language expert, I hope that continues because I don’t want languages to die. I want languages to survive because they’re so interesting for so many reasons. But I mean, I find them fascinating just for the language part, but I think there’s a lot of connections to culture as well, which is also very important.

Language translation

Edward Gibson (02:39:21) Do you have hope for machine translation, that it can break down the barriers of language while all these different diverse languages exist? I guess there are many ways of asking this question, but basically: how hard is it to translate in an automated way from one language to another?
Lex Fridman (02:39:40) There’s going to be cases where it’s going to be really hard. So there are concepts that are in one language and not another. The most extreme kinds of cases are these cases of number information. So good luck translating a lot of English into Piraha. It’s just impossible. There’s no way to do it because there are no words for these concepts that we’re talking about. There’s probably the flip side. There’s probably stuff in Piraha, which is going to be hard to translate into English on the other side. And so I just don’t know what those concepts are. The space, the world space is a little different from my world space, so I don’t know what the things they talk about, things it’s going to have to do with their life as opposed to my industrial life, which is going to be different. And so there’s going to be problems like that always. Maybe it’s not so bad in the case of some of these spaces, and maybe it’s going to be hard or others. And so it’s pretty bad in number. It’s extreme, I’d say in the number space, exact number space. But in the color dimension, that’s not so bad. But it’s a problem that you don’t have to talk about the concepts.
Edward Gibson (02:40:49) And there might be entire concepts that are missing. So to you, it’s more about the space of concept versus the space of form. Like form you can probably map.
Lex Fridman (02:40:58) Yes. Yeah. So you were talking earlier about translation, and how there are good and bad translations. Now we're talking about translations of form, right? What makes writing good, right? It's not-
Edward Gibson (02:40:58) There’s the music to the form.
Lex Fridman (02:41:14) It’s not just the content, it’s how it’s written and translating that that sounds difficult.
Edward Gibson (02:41:22) We shouldn’t should say that, there is, I hesitate to say meaning, but there’s a music and a rhythm to the form. When you look at the broad picture, like the difference between Dostoevsky and Tolstoy or Hemingway, Bukowski, James Joyce, like I mentioned, there’s a beat to it. There’s an edge to it that is in the form.
Lex Fridman (02:41:46) We can probably get measures of those.
Edward Gibson (02:41:47) Yeah.
Lex Fridman (02:41:48) I don’t know.
Edward Gibson (02:41:49) That’s interesting.
Lex Fridman (02:41:50) I’m optimistic that we could get measures of those things. And so maybe that’s-
Edward Gibson (02:41:54) Translatable?
Lex Fridman (02:41:54) I don’t know. I don’t know though. I have not worked on that.
Edward Gibson (02:41:58) Actually, I would love to see you translate-
Lex Fridman (02:41:58) That sounds totally fascinating.
Edward Gibson (02:42:00) Translation, too. I mean, Hemingway is probably the lowest. I would love to see different authors, but the average per-sentence dependency length for Hemingway is probably the shortest.
Lex Fridman (02:42:14) That’s your sense, huh? It’s simple sentences?
Edward Gibson (02:42:17) Simple short sentences-
Lex Fridman (02:42:18) Short. Yeah. Yeah.
Edward Gibson (02:42:19) I mean, that’s one. If you have really long sentences, even if they don’t have center embedding-
Lex Fridman (02:42:23) They can have longer connections.
Edward Gibson (02:42:26) They can have longer connections.
Lex Fridman (02:42:26) They don’t have to. You can’t have a long, long sentence with a bunch of local words, but it is much more likely to have the possibility of long dependencies with long sentences. Yeah.

Animal communication

Edward Gibson (02:42:37) I met a guy named Aza Raskin, who does a lot of cool stuff, really brilliant, works with Tristan Harris on a bunch of stuff, but he was talking to me about communicating with animals. He co-founded the Earth Species Project, where they're trying to find the common language between whales, crows, and humans. And he was saying there's a lot of promising work showing that even though the signals are very different, if you have embeddings of the languages, they're actually trying to communicate similar types of things. Is there something you can comment on that? Is there promise to that, in everything you've seen in different cultures, especially remote cultures? That we can talk to whales?
Lex Fridman (02:43:28) I would say yes. I think it's not crazy at all. I think it's quite reasonable. There's this sort of weird view, well, odd view, I think, that human language is somehow special. I mean, maybe it is. We can certainly do more than any of the other species, and maybe our language system is part of that. It's possible. But people have often talked about how, Chomsky in fact has talked about how, only human language has this compositionality thing that he thinks is key in language. And the problem with that argument is he doesn't speak whale, and he doesn't speak crow, and he doesn't speak monkey. They say things like, well, they're making a bunch of grunts and squeaks. And that's bad reasoning. I'm pretty sure if you asked a whale what we're saying, they'd say, well, they're making a bunch of weird noises.
Edward Gibson (02:44:31) Exactly.
Lex Fridman (02:44:32) And so it's very odd reasoning to say that human language is special because we're the only ones who have human language. I'm like, well, we can't talk to those others yet. And so there's probably a signal in there, and it might very well be something complicated like human language. I mean, sure, with a small brain in lower species, there's probably not a very good communication system. But in these higher species, where you have what seem to be abilities to communicate something, there might very well be a lot more signal there than we might've otherwise thought.
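[Editor's note: the "embeddings" idea mentioned above, mapping two communication systems into vector spaces and checking whether those spaces line up, can be illustrated with orthogonal Procrustes alignment, a standard technique from cross-lingual word-embedding work. The sketch below uses entirely synthetic data and is not a description of the Earth Species Project's actual methods.]

```python
# Illustrative sketch: if "language A" embeddings are a rotated copy of
# "language B" embeddings, orthogonal Procrustes recovers the rotation
# from paired anchor points. All data here is synthetic.
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal W minimizing ||X @ W - Y||_F, given paired rows X, Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 8))                  # made-up "language B" embeddings
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # hidden rotation between spaces
A = B @ Q.T                                   # "language A" = rotated copy of B

W = procrustes_align(A, B)
print(np.allclose(A @ W, B, atol=1e-8))  # True: the shared structure is found
```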
Edward Gibson (02:45:11) But also, if we have a lot of intellectual humility here: somebody formerly from MIT, Neri Oxman, who I admire very much, has talked a lot about and worked on communicating with plants. So yes, the signal there is even weaker, but it's not out of the realm of possibility that all of nature has a way of communicating. It's a very different language, but they do develop a kind of language through chemistry, through some way of communicating with each other. And if you have enough humility about that possibility, I think it would be a very interesting, in a few decades, maybe centuries (hopefully not), a humbling possibility of being able to communicate not just between humans effectively, but between all living things on Earth.
Lex Fridman (02:46:04) Well, I mean, I think some of them are not going to have much interesting to say-
Edward Gibson (02:46:07) But [inaudible 02:46:07] still?
Lex Fridman (02:46:07) But some of them will. We don’t know. We certainly don’t know. I think-
Edward Gibson (02:46:11) I think if we’re humble, there could be some interesting trees out there.
Lex Fridman (02:46:17) Well, they’re probably talking to other trees, right? They’re not talking to us. And so to the extent they’re talking, they’re saying something interesting to some other conspecific as opposed to us. And so there probably is, there may be some signal there. So there are people out there. Actually, it’s pretty common to say that human language is special and different from any other animal communication system, and I just don’t think the evidence is there for that claim. I think it’s not obvious. We just don’t know because we don’t speak these other communication systems until we get better. I do think there are people working on that, as you pointed out though, people working on whale speak, for instance. That’s really fascinating.
Edward Gibson (02:47:02) Let me ask you a wild, out-there sci-fi question. If we make contact with an intelligent alien civilization and you get to meet them, how surprised would you be about their way of communicating? Do you think it would be recognizable? Maybe there are some parallels here to when you go to the remote tribes.
Lex Fridman (02:47:23) I would want Dan Everett with me. He is amazing at learning foreign languages, and it was an amazing feat to be able to go learn a language, Piraha, which had no translators before him. I mean, there-
Edward Gibson (02:47:36) Oh, wow. So he just shows up?
Lex Fridman (02:47:36) He was a missionary who went there. Well, there was a guy who had been there before, but he wasn't very good. And so Everett learned the language far better than anyone had learned it before him. He's a very social person; I think that's a big part of it, being able to interact. So I don't know. It kind of depends on the species from outer space, how much they want to talk to us.
Edward Gibson (02:47:58) Is there something you could say about the process he follows? How do you show up at a tribe and socialize? I mean, I guess colors and counting are among the most basic things to figure out.
Lex Fridman (02:48:07) You start with that. You actually start with objects: you just throw a stick down and say "stick." And then you say, "What do you call this?" And then they'll say the word, whatever it is. A standard thing to do is then to throw down two sticks: "two sticks." And he learned pretty quickly that there weren't any count words in this language, because this wasn't interesting to them. I mean, it was kind of weird, they'd say "some" or something, the same word over and over again. But that is a standard thing. You have to be pretty out there socially, willing to talk to random people, and these are really very different people from you. And he is very social. So I think that's a big part of this: that's how a lot of people come to know a lot of languages, they're willing to talk to other people.
Edward Gibson (02:48:50) That’s a tough one where you just show up knowing nothing.
Lex Fridman (02:48:53) Yeah. Oh god.
Edward Gibson (02:48:54) It’s beautiful that humans are able to connect in that way. You’ve had an incredible career exploring this fascinating topic. What advice would you give to young people about how to have a career like that or a life that they can be proud of?
Lex Fridman (02:49:11) When you see something interesting, just go and do it. I do that. That's something I do, which is kind of unusual for most people. So when the Piraha were available to go and visit, I was like, yes, yes, I'll go. And then when we couldn't go back, we had some trouble with the Brazilian government, there are some corrupt people there, and it was very difficult to get back in. So I was like, all right, I've got to find another group, because I wanted to keep working on this kind of problem. And so we searched around and were able to find the Tsimane, and we just went there. We didn't have much contact; we had a little bit of contact and brought someone along, and you just kind of try things. A lot of that is just ambition. Just try to do something that other people haven't done. Just give it a shot, is what I mean. I do that all the time. I don't know.
Edward Gibson (02:49:58) I love it. And I love the fact that your pursuit of fun has landed you here talking to me. This was an incredible conversation, and you're just a fascinating human being. Thank you for taking a journey through human language with me today. This is awesome.
Lex Fridman (02:50:13) Thank you very much, Lex, it’s been a pleasure.
Edward Gibson (02:50:16) Thanks for listening to this conversation with Edward Gibson. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Wittgenstein: "The limits of my language mean the limits of my world." Thank you for listening and hope to see you next time.