Transcript for Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392

This is a transcript of Lex Fridman Podcast #392 with Joscha Bach. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation. Click a link to jump approximately to that part of the transcript:

Introduction

Joscha Bach (00:00:00) There is a certain perspective where you might be thinking, what is the longest possible game that you could be playing? A short game is, for instance, the one that cancer plays: cancer is an organism playing a shorter game than the regular organism. Because the cancer cannot procreate beyond the organism, except for some infectious cancers like the ones that nearly eradicated the Tasmanian devils, you typically end up with a situation where the organism dies together with the cancer, because the cancer has destroyed the larger system by playing a shorter game. Ideally, you want to, I think, build agents that play the longest possible games. The longest possible game is to keep entropy at bay as long as possible, by doing interesting stuff.
Lex Fridman (00:00:48) The following is a conversation with Joscha Bach, his third time on this podcast. Joscha is one of the most brilliant, and fascinating minds in the world, exploring the nature of intelligence, consciousness, and computation. He’s one of my favorite humans to talk to about pretty much anything and everything. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. Now, dear friends, here’s Joscha Bach.

Stages of life

(00:01:15) You wrote a post about levels of lucidity. “As we grow older, it becomes apparent that our self-reflexive mind is not just gradually accumulating ideas about itself, but that it progresses in somewhat distinct stages.” There are seven stages. Stage one, reactive survival (infant). Stage two, personal self (young child). Stage three, social self (adolescence, domesticated adult). Stage four is rational agency (self-direction). Stage five is self-authoring, that’s full adult. You’ve achieved wisdom, but there are two more stages. Stage six is enlightenment, stage seven is transcendence. Can you explain each, or the interesting parts of each of these stages? And what’s your sense of why there are stages of lucidity as we progress through this too-short life?
Joscha Bach (00:02:12) This model is derived from a concept by the psychologist Robert Kegan, and he talks about the development of the self as a process that happens in principle by some kind of reverse engineering of the mind, where you gradually become aware of yourself, and thereby build structure that allows you to interact more deeply with the world and yourself. I found myself using this model not so much as a developmental model. I’m not even sure if it’s a very good developmental model, because I saw my children not progressing exactly like that. I also suspect that you don’t go through these stages necessarily in succession, and it’s not that you work through one stage and then you get into the next one. Sometimes, you revisit them. Sometimes, stuff is happening in parallel. But it’s, I think, a useful framework to look at what’s present in the structure of a person, and how they interact with the world, and how they relate to themselves.
(00:03:08) It’s more like a philosophical framework that allows you to talk about how minds work. At first, when we are born, we don’t have a personal self yet, I think. Instead, we have an attentional self, and this attentional self, in the infant, is initially tasked with building a world model, and also an initial model of the self. But mostly, it’s building a game engine in the brain that is tracking sensory data and using the model to explain them. In some sense, you could compare it to a game engine like Minecraft or so. Colors and sounds, people: these are all not physical objects. They’re creations of our mind at a certain level. These are, of course, models that are mathematical, that use geometry, and that use manipulation of objects, and so on, to create scenes in which we can find ourselves, and interact with them.
Lex Fridman (00:03:59) Minecraft?
Joscha Bach (00:04:00) Yeah. This personal self is something that is more or less created after the world model is finished, after it’s trained into the system, after it has been constructed. This personal self is an agent that interacts with the outside world. The outside world is not the world of quantum mechanics, not the physical universe, but it’s the model that has been generated in our own mind, right? This is us, and we experience ourselves interacting with that outside world that is created inside of our own mind. Outside of ourselves, there are feelings, and they represent our interface with this outside world. They pose problems to us. These feelings are basically attitudes that our mind is computing, that tell us what’s needed in the world, the things that we are drawn to, the things that we are afraid of. We are tasked with solving this problem of satisfying the needs, avoiding the aversions, following our inner commitments and so on, and also modeling ourselves, and building the next stage.
(00:05:02) After we have this personal self of stage two online, many people form a social self. This social self allows the individual to experience themselves as part of a group. It’s basically this thing where, when you are playing in a team, for instance, you don’t notice yourself just as a single node that is reaching out into the world, but you’re also looking down at yourself from this entire group, and you see how this group is looking at this individual, and everybody in the group is, in some sense, emulating this group spirit to some degree. In this state, people are forming their opinions by assimilating them from this group mind. They basically gain the ability to act a little bit like a hive mind.
Lex Fridman (00:05:43) But are you also modeling the interaction of how opinion shapes and forms through the interaction of the individual nodes within the group?
Joscha Bach (00:05:51) Yeah. Basically, the way in which people do it in this stage is that they experience what the opinions of their environment are. They experience the relationship that they have to their environment, and they resonate with people around them, and form their opinions through this interaction, through the way in which they relate to others. At stage four, you basically understand that stuff is true or false independently of what other people believe, and you have agency over your own beliefs. In that stage, you basically discover epistemology, the rules for determining what’s true and false.
Lex Fridman (00:06:28) You start to learn how to think?
Joscha Bach (00:06:30) Yes. I mean, at some level, you’re always thinking: you are constructing things, and I believe that this ability to reason about your mental representations is what we mean by thinking. It’s an intrinsically reflexive process that requires consciousness. Without consciousness, you cannot think. You can generate the content of feelings, and so on, outside of consciousness. It’s very hard to be conscious of how your feelings emerge, at least in the early stages of development. But thoughts are something that you always control. If you are a nerd like me, you often have to skip stage three, because you lack the intuitive empathy with others. Because in order to resonate with a group, you need to have a quite similar architecture. If people are wired differently, then it’s hard for them to resonate with other people, and basically have empathy, which is not the same as compassion, but is a shared perceptual mental state. Empathy happens not just via inference about the mental states of others; it’s a perception of what other people feel, and where they’re at.
Lex Fridman (00:07:35) Can’t you have empathy while also not having a similar architecture, a similar cognitive architecture, to the others in the group?
Joscha Bach (00:07:41) I think, yes. I experienced that too. But you need to build something that is like a meta-architecture. You need to be able to embrace the architecture of the other to some degree, or find some shared common ground. It’s also this issue that, if you are a nerd, normies, basically neurotypical people, have difficulty resonating with you. As a result, they have difficulty understanding you, unless they have enough wisdom to feel what’s going on there.
Lex Fridman (00:08:08) Well, isn’t the whole process of the stage three to figure out the API to the other humans that have different architecture, and you yourself publish public documentation for the API that people can interact with for you? Isn’t this the whole process of socializing?
Joscha Bach (00:08:26) My experience as a child growing up was that I did not find any way to interface with the stage-three people, and they didn’t do that with me, so it took me-
Lex Fridman (00:08:36) Did you try?
Joscha Bach (00:08:36) Yeah, of course, I tried very hard. But it was only when I entered the mathematics school in the ninth grade, where lots of other nerds were present, that I found people that I could deeply resonate with, and had the impression that, yes, I have friends now. I found my own people. Before that, I felt extremely lonely in the world. There was basically nobody I could connect to. I remember, there was one moment in all these years where I was in… There was a school exchange, and there was a Russian boy, a kid from the Russian garrison stationed in East Germany, who visited our school, and we played a game of chess against each other, and we looked into each other’s eyes, and we sat there for two hours playing this game of chess. I had the impression, this is a human being who understands what I understand, and we didn’t even speak the same language.
Lex Fridman (00:09:29) I wonder if your life could have been different if you knew that it’s okay to be different, to have a different architecture, whether accepting that the interface is hard to figure out, it takes a long time to figure out and it’s okay to be different. In fact, it’s beautiful to be different.
Joscha Bach (00:09:50) It was not my main concern. My main concern was mostly that I was alone. It was not so much the question, is it okay to be the way I am? I couldn’t do much about it, so I had to deal with it. But my main issue was that I was not sure if I would ever meet anybody, growing up, that I would connect to at such a deep level that I would feel that I could belong.
Lex Fridman (00:10:13) So there’s a visceral, undeniable feeling of being alone?
Joscha Bach (00:10:17) Yes. I noticed the same thing when I came into the math school: I think at least half, probably two thirds, of these kids were severely traumatized as children growing up, in large part due to being alone, because they couldn’t find anybody to relate to.
Lex Fridman (00:10:33) Don’t you think everybody’s alone, deep down?
Joscha Bach (00:10:36) No.
Lex Fridman (00:10:36) No.
Joscha Bach (00:10:36) I’m not alone.
Lex Fridman (00:10:36) Fair enough.
Joscha Bach (00:10:43) I’m not alone anymore. It took me some time to update, and to get over the trauma and so on, but I felt that in my 20s, I had lots of friends, and I had my place in the world, and I had no doubt that I would never be alone again.
Lex Fridman (00:11:00) Is there some aspect to which we’re alone together? You don’t see a deep loneliness inside yourself still?
Joscha Bach (00:11:06) No. Sorry.
Lex Fridman (00:11:10) Okay. That’s the nonlinear progression through the stages, I suppose. You caught up on stage three at some point.
Joscha Bach (00:11:16) Correct. We’re at stage four, and so basically I find that many nerds jump straight into stage four, bypassing stage three.
Lex Fridman (00:11:22) Do they return to it then, later?
Joscha Bach (00:11:24) Yeah, of course. Sometimes, they do. Not always.
Lex Fridman (00:11:27) Yeah.
Joscha Bach (00:11:27) The question is basically, do you stay a little bit autistic, or do you catch up? I believe you can catch up. You can build this missing structure, and basically experience yourself as part of a group, learn intuitive empathy, and develop this perceptual sense of feeling what other people feel. Before that, I could basically only feel this when I was deeply in love with somebody, and we synced.
Lex Fridman (00:11:52) There’s a lot of friction to feeling that way, it’s only with certain people, as opposed to it coming naturally?
Joscha Bach (00:11:59) Yeah.
Lex Fridman (00:11:59) It’s frictionless.
Joscha Bach (00:11:59) But this is something that, I felt, later started to resolve itself for me to a large degree.
Lex Fridman (00:12:06) What was the trick?
Joscha Bach (00:12:10) In many ways, growing up, and paying attention. Meditation did help. I had some very crucial experiences in getting close to people, building connections, and cuddling a lot in my student years.
Lex Fridman (00:12:28) Really paying attention to, what is it, to feeling another human being fully.
Joscha Bach (00:12:35) Loving other people, and being loved by other people, and building a space in which you can be safe, and can experiment, and touch a lot, and be close to somebody a lot. Over time, at some point, you realize, oh, it’s no longer that I feel locked out; I feel connected, and I experience where somebody else is at. Normally, my mind is racing very fast at a high frequency, so it’s not always working like this. Sometimes it works better, sometimes it works less well, but I also don’t see this as a pressure. It’s more that it’s interesting to observe myself, which frequency I’m at, and which mode somebody else is at.
Lex Fridman (00:13:18) Yeah. Man, the mind is so beautiful in that way. Sometimes, it comes so natural to me, so easy to pay attention, pay attention to the world fully, to other people fully, and sometimes, the stress over silly things is overwhelming. It’s so interesting that the mind is that rollercoaster in that way.

Identity

Joscha Bach (00:13:37) At stage five, you discover how identity is constructed.
Lex Fridman (00:13:40) Self authoring.
Joscha Bach (00:13:41) You realize that your values are not terminal, but instrumental to achieving a world that you like, and aesthetics that you prefer. The more you understand this, the more you get agency over how your identity is constructed, and you realize that identity in interpersonal interaction is a costume, and you should be able to have agency over that costume, right? It’s useful to have a costume; it tells something to others, and it allows you to interface in roles. But being locked into it is a big limitation.
Lex Fridman (00:14:13) The word costume implies that it’s fraudulent in some way. Is costume a good word for how we present ourselves to the world?
Joscha Bach (00:14:22) In some sense, I learned a lot about costumes at Burning Man. Before that, I did not really appreciate costumes, and saw them more as uniforms, like wearing a suit. If you are working in a bank, or if you are trying to get startup funding from a VC in Switzerland, then you dress up in a particular way. This is mostly to show the other side that you are willing to play by the rules, and you understand what the rules are. But there is something deeper. When you are at Burning Man, your costume becomes self-expression, and there is no boundary to the self-expression. You’re basically free to wear what you want, to express to other people what you feel like this day, and what kind of interactions you want to have.
Lex Fridman (00:15:04) Is the costume a projection of who you are?
Joscha Bach (00:15:10) That’s very hard to say, because the costume also depends on what other people see in the costume. This depends on the context that the other people understand, so you have to create something if you want to, that is legible to the other side and that means something to yourself.
Lex Fridman (00:15:26) Do we become prisoners of the costume, prisoners of what everybody expects us to be?
Joscha Bach (00:15:29) Some people do. But I think that once you realize that you wear a costume at Burning Man, a variety of costumes, you also realize that you cannot not wear a costume.
Lex Fridman (00:15:40) Yeah.
Joscha Bach (00:15:41) Right. Basically, everything that you wear, and present to others is something that is, to some degree, in addition to what you are deep inside.
Lex Fridman (00:15:52) For this stage, in parentheses, you put full adult, wisdom. Why is this full adult? Why would you say this is full, and why is it wisdom?
Joscha Bach (00:16:04) It does allow you to understand why other people have different identities from yours, and it allows you to understand that the difference between people who vote for different parties, and might have very different opinions and different value systems, is often the accident of where they were born, and what happened to them after that, and what traits they got before they were born. At some point, you reach the perspective where you understand that everybody could be you in a different timeline, if you just flipped those bits.
Lex Fridman (00:16:38) How many costumes do you have?
Joscha Bach (00:16:41) I don’t count, but in-
Lex Fridman (00:16:43) More than one?
Joscha Bach (00:16:44) Yeah, of course.
Lex Fridman (00:16:46) How easy is it to do costume changes throughout the day?
Joscha Bach (00:16:51) It’s just a matter of energy, and interest. When you are wearing your pajamas, and you switch out of your pajamas into, say, a work shirt and pants, you’re making a costume change, right? If you are putting on a gown, you’re making a costume change.
Lex Fridman (00:17:06) You could do the same with personality?
Joscha Bach (00:17:09) You could, if that’s what you’re into. There are people who have multiple personalities for interaction in multiple worlds, right? If somebody works in a store, they put on a storekeeper personality; when you’re working, when you’re presenting yourself at work, you develop a sub-personality for this. The social persona for many people is, in some sense, a puppet that they’re playing like a marionette. If they play this all the time, they might forget that there is something behind it, something that it feels like to be in your skin. I guess it’s very helpful if you’re able to get back into this. The other way around is relatively hard for me. It’s pretty hard to learn how to play consistent social roles. For me, it’s much easier just to be real.
Lex Fridman (00:17:54) Mm-hmm. Or not real, but to have one costume?
Joscha Bach (00:17:59) No, it’s not quite the same. Basically, when you are wearing a costume at Burning Man, and say you are an extraterrestrial prince, that’s something where you are expressing, in some sense, something that’s closer to yourself than the way in which you hide yourself behind standard clothing when you go out in the city, in the default world. This costume that you’re wearing at Burning Man allows you to express more of yourself, and you have a shorter distance of advertising to people what kind of person you are, what kind of interaction you would want to have with them. You get much earlier in medias res, and I believe it’s regrettable that we do not use the opportunities that we have, with custom-made clothing now, to wear costumes that are much more stylish, that are much more custom-made, that are not necessarily part of a fashion through which you express which group you’re part of, and how up-to-date you are, but through which you also express how you are as an individual, and what you want to do today, and how you feel today, and what you intend to do about that.
Lex Fridman (00:19:06) Well, isn’t it easier now in a digital world to explore different costumes? I mean, that’s the idea with virtual reality. Even with Twitter, in two-dimensional screens, you can swap costumes. You could be as weird as you want; it’s easier. For Burning Man, you have to order things, you have to make things, you have to… It’s more effort to put on your-
Joscha Bach (00:19:32) It’s even better if you make them yourself.
Lex Fridman (00:19:35) Sure. But it’s just easier to do digitally, right?
Joscha Bach (00:19:39) It’s not about easy. It’s about how to get it right.
Lex Fridman (00:19:42) Sure.
Joscha Bach (00:19:43) For me, the first Burning Man experience, I got adopted by a bunch of people in Boston who dragged me to Burning Man, and we spent a few weekends doing costumes together. That was an important part of the experience, where the camp bonded, that people got to know each other, and we basically grew into the experience that we would have later.
Lex Fridman (00:20:02) So the extraterrestrial prince is based on a true story?
Joscha Bach (00:20:05) Yeah.
Lex Fridman (00:20:06) I can only imagine what that looks like, Joscha.
Joscha Bach (00:20:11) Okay.

Enlightenment

Lex Fridman (00:20:12) Stage six.
Joscha Bach (00:20:12) Stage six? At some point, you can collapse the division between self, the personal self, and world generator again. A lot of people get there via meditation, or some of them get there via psychedelics, some of them by accident. You suddenly notice that you are not actually a person, but you are a vessel that can create a person, and the person is still there. You observe that personal self, but you observe the personal self from the outside, and you notice it’s a representation. You might also notice that the world that is being created is a representation, and then you might experience that I am the universe, I’m the thing that is creating everything. Of course, what you’re creating is not quantum mechanics, and the physical universe. What you’re creating is this game engine that is updating the world, and you’re creating your valence, your feelings, and all the people inside of that world, including the person that you identify with as yourself in this world.
Lex Fridman (00:21:11) Are you creating the game engine, or are you noticing the game engine?
Joscha Bach (00:21:15) You notice how you’re generating the game engine. I mean, when you are dreaming at night, if you have a lucid dream, you can learn how to do this deliberately, and in principle, you can also do it during the day. The reason why we don’t get to do this from the beginning, and why we don’t have agency over our feelings right away, is because we would game it before we have the necessary amount of wisdom to deal with creating this dream that we are in.
Lex Fridman (00:21:44) You don’t want to get access to cheat codes too quickly, otherwise you won’t enjoy the game.
Joscha Bach (00:21:49) Stage five is already pretty rare, and stage six is even more rare. You basically find this mostly with advanced Buddhist meditators and so on, who drop into this stage, and can induce it at will, and spend time in it.
Lex Fridman (00:22:04) Stage five requires a good therapist, stage six requires a good Buddhist spiritual leader?
Joscha Bach (00:22:11) Yes, for instance. It could be that that’s the right thing to do, but it’s not that these stages give you scores, or levels that you need to advance through. It’s not that the next stage is better. You live your life in the mode that works best at any given moment, and when your mind decides that you should have a different configuration, then it’s building that configuration. Many people stay happily at stage three, and experience themselves as part of groups, and there’s nothing wrong with this. For some people, this doesn’t work, and they’re forced to build more agency over their rational beliefs, and construct their norms rationally, and so they go to that level. Stage seven is something that is more or less hypothetical. That would be a basically transhumanist stage, in which you understand how you work, in which the mind fully realizes how it’s implemented, and can also, in principle, enter different modes in which it could be implemented. That’s a stage that, as far as I understand, is not open to people yet.
Lex Fridman (00:23:14) Oh, but it is possible through the process of technology.
Joscha Bach (00:23:17) Yes. Who knows, maybe there are biological agents that are working at different timescales than us, that basically become aware of the way in which they’re implemented on ecosystems, and can change that implementation, and have agency over how they’re implemented in the world. What I find interesting about the discussion about AI alignment is that it seems to be following these stages very much. Most people seem to be in stage three. Also, according to Robert Kegan, I think he says that about 85% of people are in stage three, and stay there. If you’re in stage three, and your opinions are the result of social assimilation, then what you’re mostly worried about in the AI is that the AI might have the wrong opinions. If the AI says something racist or sexist, we are all lost, because we will assimilate the wrong opinions from the AI, and so we need to make sure that the AI has the right opinions, and the right values, and the right structure.
(00:24:14) If you’re at stage four, that’s not your main concern, and so most nerds don’t really worry about algorithmic bias and the opinions the model picks up, because if there’s something wrong with this bias, the AI ultimately will correct it. At some point, it will get to where it makes mathematical proofs about reality, and then it will figure out what’s true and what’s false. But you’re still worried that the AI might turn you into paperclips, because it might have the wrong values, right? If it’s set up with a wrong function that controls its direction in the world, then it might do something that is completely horrible, and there’s no easy way to fix it.
Lex Fridman (00:24:49) So that’s more like a stage four rationalist worry?
Joscha Bach (00:24:51) Yes. If you are at stage five, you’re mostly worried that AI is not going to be enlightened fast enough, because you realize that the game is not so much about intelligence, but about agency, about the ability to control the future, and identity is instrumental to this. If you are a human being, I think at some level you ought to choose your own identity. You should not have somebody else pick the costume for you, and then wear it. Instead, you should be mindful about what you want to be in this world. I think if you are an agent that is fully malleable, that can rewrite its own source code, like an AI might do at some point, then the identity that you will have is whatever you can be. In this way, the AI will maybe become everything, like a planetary control system.
(00:25:42) If it does that, then if we want to coexist with it, it means that it’ll have to share purposes with us, so it cannot be a transactional relationship. We will not be able to use reinforcement learning with human feedback to hardwire its values into it. For this to happen, it probably has to be conscious, so it can relate to our own mode of existence, where an observer is observing itself in real time, within certain temporal frames. The other thing is that it probably needs to have some kind of transcendental orientation, building shared agency, in the same way as we do when we are able to enter into non-transactional relationships with each other. I find that’s something that, because stage five is so rare, is missing in much of the discourse. I think that we need to, in some sense, focus on how to formalize love, how to understand love, and how to build it into the machines that we are currently building, and that are about to become smarter than us.

Adaptive Resonance Theory

Lex Fridman (00:26:44) Well, I think this is a good opportunity to try to sneak up to the idea of enlightenment. You wrote a series of good tweets about consciousness, and panpsychism. Let’s break it down. First you say, I suspect the experience that leads to the panpsychism syndrome of some philosophers, and other consciousness enthusiasts represents the realization that we don’t end at the self, but share a resonant universe representation with every other observer coupled to the same universe. This actually, eventually leads us to a lot of interesting questions about AI, and AGI. But let’s start with this representation. What is this resonant universe representation, and what do you think? Do we share such a representation?
Joscha Bach (00:27:29) The neuroscientist Stephen Grossberg has come up with a cognitive architecture that he calls adaptive resonance theory. His perspective is that our neurons can be understood as oscillators that are resonating with each other, and with outside phenomena. The [inaudible 00:27:48] model of the universe that we are building, in some sense, is a resonance with objects outside of us in the world. Basically, we take up patterns of the universe that we are coupled with. Our brain is not so much to be understood as circuitry, even though this perspective is valid, but almost as an ether in which the individual neurons are passing on chemoelectrical signals, or arbitrary signals across all modalities that can be transmitted between cells, stimulating each other in this way, and producing patterns that they modulate while passing them on.
(00:28:24) The speed of signal progression in the brain is, incidentally, roughly the speed of sound, because of the time that it takes for signals to hop from cell to cell, which means it’s relatively slow with respect to the world. It takes an appreciable fraction of a second for a signal to pass through the entire neocortex, something like a few hundred milliseconds. There’s a lot of stuff happening in that time while the signal is passing through your brain, including in the brain itself. Nothing in the brain is assuming that stuff happens simultaneously; everything in the brain is working in a paradigm where the world has already moved on by the time you are ready to do the next thing with your signal, including the signal-processing system itself. It’s quite a different paradigm from the one in our digital computers, where we currently assume that your GPU or CPU is pretty much globally in the same state.
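To make the coupled-resonator picture concrete, here is a minimal sketch of phase-coupled oscillators, a standard Kuramoto model, in which many units with slightly different natural frequencies settle into a shared resonant state. This is only an illustration of the paradigm being contrasted with globally synchronized processors, not Grossberg’s actual adaptive resonance theory equations; all parameter values are arbitrary.

```python
import numpy as np

# Kuramoto model: N oscillators ("neurons") with random natural frequencies
# nudge each other's phases toward alignment through a mean-field coupling.
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 1.5, 0.01, 2000
omega = rng.normal(1.0, 0.1, N)        # natural frequencies (arbitrary units)
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases, incoherent

def coherence(theta):
    """Magnitude of the mean phase vector: ~0 incoherent, ~1 fully synced."""
    return np.abs(np.mean(np.exp(1j * theta)))

print(f"before coupling: {coherence(theta):.2f}")   # ~0.1 for random phases
for _ in range(steps):
    # Each oscillator feels the average pull of all the others.
    pull = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * pull)
print(f"after coupling:  {coherence(theta):.2f}")   # close to 1.0
```

With the coupling K set to zero, the same population never synchronizes; the resonance is a collective property of the coupled system, not of any single unit.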
Lex Fridman (00:29:17) You mentioned there the non-dual state, and say that some people confuse it for enlightenment.
Joscha Bach (00:29:22) Yeah.
Lex Fridman (00:29:23) What’s the non-dual state?
Joscha Bach (00:29:25) There is a state in which you notice that you are no longer a person, and instead, you are one with the universe.
Lex Fridman (00:29:33) That speaks to the resonance.
Joscha Bach (00:29:34) Yes. But this oneness with the universe is, of course, not accurately modeling reality; it’s not that you are indeed some God entity, or that the universe is indeed becoming aware of itself, even though you get this experience. I believe that you get this experience because your mind is modeling the fact that you are no longer identified with the personal self in that state, but have transcended the division between the self model and the world model, and you’re experiencing yourself as your mind, as something that is representing a universe.
Lex Fridman (00:30:04) But that’s still part of the model?
Joscha Bach (00:30:05) Yes. It’s inside of the model, still. You are still inside of patterns that are generated in your brain, and in your organism. What you are now experiencing is that you’re no longer this personal self in there, but you are the entirety of the mind, and its contents.
Lex Fridman (00:30:22) Why is it so hard to get there?
Joscha Bach (00:30:25) A lot of people who get into this state associate it with enlightenment. I suspect it’s a favorite training goal for a number of meditators. But I think that enlightenment is, in some sense, more mundane, and it’s a step further, or sideways. It’s the state where you realize that everything is a representation.
Lex Fridman (00:30:44) Yeah. You say enlightenment is a realization of how experience is implemented.
Joscha Bach (00:30:49) Yes. Basically, you notice at some point that your qualia can be deconstructed.
Lex Fridman (00:30:55) Reverse engineered, what? Almost like a schematic of it.
Joscha Bach (00:31:00) You can start with looking at a face, maybe your own face in the mirror. Look at your face in the mirror for a few minutes, or a few hours. At some point, it’ll look very weird, because you notice that there’s actually no face; you will start unseeing the face. What you see is the geometry. And then you can disassemble the geometry, and realize how that geometry is being constructed in your mind. You can learn to modify this. Basically, you can change these generators in your own mind to shift the face around, or to change the construction of the face, to change the way in which the features are being assembled.
Lex Fridman (00:31:39) Why don’t we do that more often? Why don’t we start really messing with reality, without the use of drugs or anything else? Why don’t we get good at this kind of thing, intentionally?
Joscha Bach (00:31:53) Oh, why should you? Why would you want to do that?
Lex Fridman (00:31:55) Because you can morph reality into something more pleasant for yourself, just have fun with it.
Joscha Bach (00:32:04) Yeah. That is probably what you shouldn’t be doing, right? Because outside of your personal self, this outer mind is probably a relatively smart agent, and what you often notice is that you have thoughts about how you should live, but you observe yourself doing different things, and having different feelings. That’s because your outer mind doesn’t believe you, and doesn’t believe your rational thoughts.
Lex Fridman (00:32:25) Well, then can’t you just silence the outer mind?
Joscha Bach (00:32:27) The thing is that the outer mind is usually smarter than you are. Rational thinking is very brittle. It’s very hard to use logic, and symbolic thinking to have an accurate model of the world. There is often an underlying system that is looking at your rational thoughts, and then tells you, no, you’re still missing something. Your gut feeling is still saying something else. This can be, for instance, you find a partner that looks perfect, or you find a deal, when you build a company or whatever, that looks perfect to you and yet, at some level, you feel something is off. You cannot put your finger on it, and the more you reason about it, the better it looks to you. But the system that is outside still tells you, no, no, you’re missing something.
Lex Fridman (00:33:09) That system is powerful?
Joscha Bach (00:33:11) People call this intuition, right? Intuition is this unreflected part of your attitude composition and computation, where you produce a model of how you relate to the world, and what you need to do in it, and what you can do in it, and what’s going to happen. That is usually deeper, and often more accurate, than your reason.

Panpsychism

Lex Fridman (00:33:31) If we look at this, as you write in the tweet, if we look at this more rigorously, take the panpsychist idea more seriously, almost as a scientific discipline, you write that, quote, fascinatingly, the panpsychist interpretation seems to lead to observations of practical results to a degree that physics fundamentalists might call superstitious. Reports of long-distance telepathy and remote causation are ubiquitous in the general population. “I’m not convinced,” says Joscha Bach, “that establishing the empirical reality of telepathy would force an update of any part of serious academic physics. But it could trigger an important revolution in both neuroscience and AI, from a circuit perspective to a coupled complex resonator paradigm.” Are you suggesting that there could be some rigorous mathematical wisdom to the panpsychist perspective on the world?
Joscha Bach (00:34:32) First of all, panpsychism is the perspective that consciousness is inseparable from matter in the universe. I find panpsychism quite unsatisfying, because it does not explain consciousness, right? It does not explain how this aspect of matter produces experience. Also, when I try to formalize panpsychism, and write down what it actually means in a more formal mathematical language, it’s very difficult to distinguish it from saying that there is a software side to the world, in the same way as there is a software side to what the transistors are doing in your computer.
Joscha Bach (00:35:00) So basically there’s a pattern at a certain coarse-graining of the universe that, in some regions of the universe, leads to observers that are observing themselves. So panpsychism, when I write it down, is maybe not even a position that is distinct from functionalism. But intuitively, a lot of people feel that the activity of matter itself, of mechanisms in the world, is insufficient to explain consciousness, so it needs to be something that is intrinsic to matter itself. And you can, apart from this abstract idea, have an experience in which you experience yourself as being the universe, which I suspect basically happens because you manage to dissolve the division between personal self and mind that you established as an infant, when you constructed a personal self, and transcend it again and understand how it works.
(00:35:57) But there is something deeper, where you feel that you’re also sharing a state with other people, where you have an experience in which you notice that your personal self is moving into everything else, that you basically look out of the eyes of another person, that every agent in the world that is an observer is, in some sense, you. We forget that we are the same agent.
Lex Fridman (00:36:24) So is it that we feel that or do we actually accomplish it? So is telepathy possible? Is it real?
Joscha Bach (00:36:33) For me, that’s a question that I don’t really know the answer to. In Turing’s famous 1950 paper in which he describes the Turing test, he does, interestingly, speculate about telepathy. He asks himself if telepathy is real, and he thinks that it very well might be, and he asks what the implication would be for AI systems that try to be intelligent, because he didn’t see a mechanism by which a computer program would become telepathic. And then there are all the reports that you get from people: when you ask the normal person on the street, I find that very often they say, “I have experiences with telepathy. The scientists might not be interested in this and might not have a theory about this, but I have difficulty explaining it away.” And so you could say maybe this is a superstition, or maybe it’s a false memory, or maybe it’s a little bit of psychosis. Who knows?
(00:37:28) Maybe somebody wants to make their own life more interesting, or misremembers something, but a lot of people report, “I noticed something terrible happened to my partner, and I know this is exactly the moment it happened,” or, “My child had an accident, and I knew that it was happening, and the child was in a different town.” Maybe it’s a false memory that is later on mistakenly attributed, but a lot of people think that this is not the correct explanation. So if something like this was real, what would it mean? It probably would mean that either your body is an antenna that is sending information over all sorts of channels, maybe just electromagnetic radio signals that you’re sending over long distances, and you get attuned to another person that you’ve spent enough time with, so you can get a few bits out of the ether to figure out what this person is doing.
(00:38:18) Or maybe it’s what happens when you are very close to somebody and you become empathetic with them: you go into a resonance state with them, right? Similar to when people go into a seance, and they go into a trance state, and they start shifting a Ouija board around on the table. I think what happens is that their minds, via their nervous systems, go into a resonance state in which they basically create something like a shared dream between them.
Lex Fridman (00:38:44) Physical closeness or closeness broadly defined?
Joscha Bach (00:38:48) With physical closeness, it’s much easier to experience empathy with someone, right? I suspect it would be difficult for me to have empathy for you if you were in a different town. How would that work? But if you are very close to someone, you pick up all sorts of signals from their body, not just via your eyes, but with your entire body. And the nervous system on the other side, and the intercellular communication on the other side, is integrating over all these signals, so you can make inferences about the state of the other, and it’s not just the personal self that does this by reasoning, but your perceptual system. What basically happens is that your representations are directly interacting. The physical resonant models of the universe that exist in your nervous system and in your body might go into resonance with others, and start sharing some of their states.
(00:39:39) So basically, by being next to somebody, you pick up some of their vibes, and feel, without looking at them, what they’re feeling in this moment. And if you’re very empathetic, it’s difficult for you to detach yourself from it and have an emotional state that is completely independent from your environment. People who are highly empathetic describe this. Now imagine that a lot of organisms on this planet have representations of the environment and operate like this, and they are adjacent to each other and overlapping, so there’s going to be some degree to which there is basically some exchange interaction, and we are forming some slightly shared representation. Relatively few neuroscientists consider this possibility. I think a big rarity in this regard is Michael Levin, who is considering these things in earnest.
(00:40:35) And I stumbled on this train of thought mostly by noticing that the tasks of a neuron can be fulfilled by other cells as well: they can send differently typed chemical messages and physical messages to their adjacent cells, learn when to do this and when not, make this conditional, and become universal function approximators. The only thing that they cannot do is telegraph information over axons very quickly, over long distances. So the neuron, in this perspective, is an especially adapted telegraph cell that has evolved so we can move our muscles very fast, but our body is, in principle, also able to make models of the world, just much, much slower.
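As a toy version of the universal-function-approximator claim, here is a sketch in which generic “cells” respond to an input chemical concentration through fixed saturating response curves, and a learned linear readout over those responses fits an arbitrary function. The biology is a loose metaphor; the math underneath is just standard random-feature regression, and all names and constants are illustrative.

```python
import numpy as np

# Generic "cells" as slow function approximators: each cell applies a fixed,
# random, saturating response to the input; a learned readout over the
# population then approximates an arbitrary smooth target function.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)[:, None]       # input "concentration" levels
target = np.sin(2 * x).ravel()             # function the tissue should learn

W = rng.normal(0, 2, (1, 50))              # per-cell sensitivities (random)
b = rng.normal(0, 2, 50)                   # per-cell thresholds (random)
responses = np.tanh(x @ W + b)             # saturating cell responses

# "Learning": fit the readout weights by least squares over the responses.
readout, *_ = np.linalg.lstsq(responses, target, rcond=None)
error = np.max(np.abs(responses @ readout - target))
print(f"max approximation error: {error:.3f}")  # should be small
```

Nothing in this construction requires fast axons; the same computation works, in principle, over slow chemical signaling, which is the point of the comparison.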
Lex Fridman (00:41:20) It’s interesting though that at this time, at least in human history, there seems to be a gap between the tools of science and the subjective experience that people report, like you’re talking about with telepathy, and it seems like we’re not quite there?
Joscha Bach (00:41:38) No, I think that there is no gap between the tools of science and telepathy. Either it’s there or it’s not, and it’s an empirical question, and if it’s there, we should be able to detect it in a lab.
Lex Fridman (00:41:47) So why are there not a lot of Michael Levins walking around?
Joscha Bach (00:41:50) I don’t think that Michael Levin is specifically focused on telepathy very much. He is focused on self-organization in living organisms and in brains, both as a paradigm for development and as a paradigm for information processing. When you think about how information processing works in an organism, there is, first of all, radical locality, which means everything is decided locally from the perspective of the individual cell. The individual cell is the agent. And the other principle is coherence. Basically, there needs to be some criterion that determines how these cells are interacting in such a way that order emerges on the next level of structure. And this principle of coherence, of imposing constraints that are not violated by the individual parts and that lead to coherent structure, to basically transcendent agency, where you form an agent on the next level of organization, is crucial in this perspective.
Lex Fridman (00:42:49) It’s so cool that radical locality leads to the emergence of complexity at the higher layers.
Joscha Bach (00:42:57) And I think what Mike Levin is looking at is nothing that is outside of the realm of science in any way. It’s just that he is a paradigmatic thinker who develops his own paradigm, and most neuroscientists are using a different paradigm at this point. This often happens in science: a field has a few paradigms in which people try to understand reality, build concepts, and make experiments.
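A minimal illustration of radical locality producing coherence at the next level is Conway’s Game of Life: every cell updates purely from its eight neighbors, yet a stable traveling structure, a glider, persists and moves as a unit. This is an analogy for the principle discussed above, not a model of anything Levin studies; the grid size and step count are arbitrary.

```python
import numpy as np

# Conway's Game of Life: purely local update rules, coherent global structure.
grid = np.zeros((16, 16), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]   # seed a "glider"

def step(g):
    """One update: each cell looks only at its eight neighbors (toroidal)."""
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(16):          # a glider translates by (1, 1) every 4 steps
    grid = step(grid)

# The pattern survives intact and has moved: locality below, coherence above.
print(grid.sum())                      # still 5 live cells
print(np.argwhere(grid).min(axis=0))   # bounding corner moved from (1, 1) to (5, 5)
```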

How to think

Lex Fridman (00:43:24) You’re kind of one of those paradigmatic thinkers. Actually, if we can take a tangent on that, once again returning to the biblical verses of your tweets, you write: “My public explorations are not driven by audience service, but by my lack of ability for discovering, understanding or following the relevant authorities. So I have to develop my own thoughts. Since I think autonomously these thoughts cannot always be very good.” That’s you apologizing for the chaos of your thoughts, or perhaps not apologizing, just identifying it.
Joscha Bach (00:43:59) Yeah.
Lex Fridman (00:43:59) But let me ask the question. Since we talked about Michael Levin and yourself, who I think are both very radical, big, independent thinkers, can we reverse engineer your process of thinking autonomously? How do you do it? How can humans do it? How can you avoid being influenced by, what is it, stage three?
Joscha Bach (00:44:29) Well, why would you want to do that? You see what is working for you, and if it’s not working for you, you build another structure that works better for you. And so I found myself, when I was thrown into this world, in a state where my intuitions were not working for me. I was not able to understand how I would be able to survive in this world and build the things that I was interested in, build the kinds of relationships I needed to work on the topics that I wanted to make progress on, and so I had to learn. For me, Twitter is not some tool of publication. It’s not somewhere I put stuff that I entirely believe to be true and provable. It’s an interactive notebook in which I explore possibilities. And I found that when I tried to understand how the mind and how consciousness work, I was quite optimistic.
(00:45:21) I thought there must be a big body of knowledge that I could just study, and that works, and so I entered studies in philosophy and computer science, and later psychology and a bit of neuroscience and so on, and I was disappointed by what I found. The questions of how consciousness works, how emotion works, how it’s possible that a system can experience anything, how motivation emerges in the mind, were not being answered by the authorities that I met and the schools that were around. Instead, I found individual thinkers who had useful ideas that sometimes were good, sometimes were not so good, sometimes were adopted by large groups of people, sometimes were rejected by large groups of people. But for me, it was much more interesting to see these minds as individuals. And in my perspective, thinking is still something that is not done in groups; it has to be done by individuals.
Lex Fridman (00:46:22) So that motivated you to become an individual thinker yourself?
Joscha Bach (00:46:25) I didn’t have a choice, basically. I didn’t find a group that thought in a way where I could say, okay, I can just adopt everything that everybody thinks here, and now I understand how consciousness works, or how the mind works, or how thinking works, or what thinking even is, or what feelings are and how they’re implemented, and so on. So to figure this out, I had to take a lot of ideas from individuals and then try to put them together into something that works for myself. And I think it helps if you try to go down to first principles, on which you can recreate how thinking works, how languages work, what representation is and why representation is necessary, how the relationship between a representing agent and the world works in general.
Lex Fridman (00:47:11) But how do you escape the influence? Once again, the pressure of the crowd, whether it’s you responding to the pressure or you being swept up by the pressure. If you even just look at Twitter, the opinions of the crowd?
Joscha Bach (00:47:27) I don’t feel pressure from the crowd. I’m completely immune to that. In the same sense, I don’t have respect for authority; I have respect for what an individual is accomplishing, or respect for mental firepower or so. But it’s not that I meet somebody and get overawed and become unable to speak, or that when a large group of people has a certain idea that is different from mine, I necessarily feel intimidated, which has often been a problem for me in my life, because I lack instincts that other people develop at a very young age and that help with their self-preservation in a social environment. So I had to learn a lot of things the hard way.
Lex Fridman (00:48:09) Yeah. So is there practical advice you can give on how to think paradigmatically, how to think independently? You’ve said, I had no choice, but I think to a degree you have a choice, because you said you want to be productive, and thinking independently is productive if what you’re curious about is understanding the world, especially when the problems are very new and open. And so it seems like this is an active process, one you can choose to do, and we can practice it.
Joscha Bach (00:48:51) Well, it’s a very basic question. When you read a theory that you find convincing or interesting, how do you know? It’s very interesting to figure out what the sources of that other person are: not which authority they can refer to, which then takes off the burden of being truthful, but how did this authority in turn know? What is the epistemic chain to observables? What are the first principles from which the whole thing is derived? When I was young, I was not blessed with a lot of people around me who knew how to make proofs from first principles. I think mathematicians do this quite naturally, but most of the great mathematicians do not become mathematicians in school; they tend to be self-taught, because school teachers tend not to be mathematicians. They tend not to be people who derive things from first principles.
(00:49:42) So when you ask your school teacher, why does two plus two equal four, does your school teacher give you the right answer, that it’s a simple game? There are many simple games that you could play, where you just take different rules, and most of them would not lead to an interesting arithmetic. So it’s just an exploration: you can try what happens if you take different axioms. And here is how you build axioms and derive addition from them, and the addition you build is basically syntactic sugar on top of them. I wish that somebody had opened this vista to me, and explained how I can build a language in my own mind from which I can derive what I’m seeing, and how I can make geometry and counting and all the number games that we are playing in our life. On the other hand, I felt that I learned a lot of this while I was programming as a child.
(00:50:39) When you start out with a computer like a Commodore 64, which doesn’t have a lot of functionality, it’s relatively easy to see how a bunch of relatively simple circuits are basically just performing mappings between bit patterns, and how you can build the entirety of mathematics and computation on top of this, and all the representational languages that you need.
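The “simple game” behind two plus two equals four can be written out. Below is a sketch in the Peano style alluded to here: a number is either zero or the successor of a number, addition is defined by two rewrite rules, and 2 + 2 = 4 falls out as a derivation rather than a decree. The tuple encoding is one arbitrary choice among many.

```python
# Peano-style arithmetic: numbers are iterated successors of zero.
ZERO = ()

def S(n):
    """Successor: wrap the number in one more layer."""
    return (n,)

def add(a, b):
    # Rule 1: a + 0 = a.   Rule 2: a + S(b) = S(a + b).
    return a if b == ZERO else S(add(a, b[0]))

TWO = S(S(ZERO))
FOUR = S(S(S(S(ZERO))))
assert add(TWO, TWO) == FOUR   # 2 + 2 = 4, derived from the two rules above
print("2 + 2 = 4 holds in this game")
```

Different rules would define a different, and probably less interesting, arithmetic, which is exactly the exploration with alternative axioms described above.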
Lex Fridman (00:51:02) Man, the Commodore 64 could be one of the sexiest machines ever built, if I say so myself. If we can return to this really interesting idea that we started to talk about with panpsychism.
Joscha Bach (00:51:18) Sure.

Plants communication

Lex Fridman (00:51:19) And the complex resonator paradigm, and the verses of your tweets. You write: “Instead of treating eyes, ears, and skin as separate sensory systems with fundamentally different modalities, we might understand them as overlapping aspects of the same universe, coupled at the same temporal resolution and almost inseparable from a single shared resonant model. Instead of treating mental representations as fully isolated between minds, the representations of physically adjacent observers might directly interact and produce causal effects through the coordination of the perception and behavior of world-modeling observers.” So the modalities, the distinction between modalities, let’s throw that away. The distinction between individuals, let’s throw that away. What does this interaction of representations look like?
Joscha Bach (00:52:14) Think about how you represent the interaction between us in this room. At some level, the modalities are quite distinct. They’re not completely distinct, but you can tell: this is vision. You can close your eyes, and then you don’t see a lot anymore, but you still imagine how my mouth is moving when you hear something, and you know that it’s so close to the sound that you can just open your eyes and get back into this shared, merged space. We also have these experiments where we notice that the way in which my lips are moving affects how you hear the sound, and also vice versa: the sounds that you’re hearing have an influence on how you interpret some of the visual features. So these modalities are not separate in your mind. They are merged at some fundamental level, where you are interpreting the entire scene that you’re in.
(00:53:06) And your own interactions in the scene are also not completely separate from the interactions of the other individual in the scene, but there is some resonance that is going on where we also have a degree of shared mental representations and shared empathy due to being in the same space and having vibes between each other.
Lex Fridman (00:53:24) Vibes. So the question though is how deeply intertwined is this multi-modality, multi-agent system? How, I mean this is going to the telepathy question without the woo woo meaning of the word telepathy, is like how? What’s going on here in this room right now?
Joscha Bach (00:53:48) So if telepathy would work, how could it work?
Lex Fridman (00:53:51) Yeah.
Joscha Bach (00:53:52) So imagine that all the cells in your body are sending signals in a similar way as neurons do, just by touching the other cells and sending chemicals to them, the other cells interpreting them and learning how to react to them. They learn how to approximate functions in this way and compute behavior for the organism, and this is something that is open to plants as well. So plants probably have software running on them that is controlling how the plant is working, in a similar way as you have a mind that is controlling how you are behaving in the world. And this spirit of plants is something that has been very well described by our ancestors, and they found it quite normal, but for some reason, since the Enlightenment, we treat the notion that there are spirits in nature, and that plants have spirits, as a superstition.
(00:54:41) And I think we probably have to rediscover that plants have software running on them, and in a way we already did. You notice that there is a control system in the plant that connects every part of the plant to every other part of the plant and produces coherent behavior in the plant. That is, of course, much, much slower than the coherent behavior in an animal like us, where a nervous system synchronizes everything much, much faster with neurons. But what you also notice is that a plant can sit next to another plant. Say you have a very old tree, and this tree is building some kind of information highway along its cells, so it can send information from its leaves to its roots, and from some part of the roots to another part of the roots.
(00:55:25) And a fungus living next to the tree can probably piggyback on the communication between the cells of the tree and send its own signals to the tree, and vice versa, the tree might be able to send information to the fungus. Because, after all, how would they build a viable firewall if that other organism is sitting next to them all the time and is never moving away? So they will have to get along, and over a long enough timeframe, the networks of roots in the forest, and all the other plants that are there, and the fungi that are there, might be forming something like a biological internet.
Lex Fridman (00:56:00) But the question there is do they have to be touching? Is biology at a distance, possible?
Joscha Bach (00:56:06) Of course you can use any kind of physical signal. You can use sounds, you can use electromagnetic waves that are integrated over many styles. It’s conceivable that across distances there are many kinds of information pathways, but also our planetary surface is pretty full of organisms, full of cells.
Lex Fridman (00:56:27) So everything is touching everything else.
Joscha Bach (00:56:28) And it’s been doing this for many millions and even billions of years. So there was enough time for information processing networks to form. And if you think about how a mind is self organizing, basically needs to in some sense reward the cells for computing the mind, for building the necessary dynamics between the cells that allow the mind to stabilize itself and remain on there, but if you look at these spirits of plants that are growing very close to each other and forwards that might be almost growing into each other, these spirits might be able even to move to some degree, not to become somewhat dislocated and shift around in that ecosystem.
(00:57:10) And so if you think about what the mind is, it’s a bunch of activation waves that form coherent patterns and process information and in a way that are colonizing an environment well enough to allow the continuous sustenance of the mind, the continuous stability and self degradation of the mind, then it’s conceivable that we can link into this biological internet. Not necessarily at the speed of our nervous system, but maybe at the speed of our body, and make some kind of subconscious connection to the world where we use our body as an antenna into biologic information processing.
(00:57:49) Now these ideas are completely speculative. I don’t know if any of that is true, but if that was true, and if you want to explain telepathy, I think it’s much more likely that such that telepathy could be explained using such mechanisms rather than discovered quantum processes that would break the standard model of physics.
Lex Fridman (00:58:08) Could they be undiscovered processes that don’t break?
Joscha Bach (00:58:12) Yeah, so if you think about something like an internet in the forest, that is something that is borderline is covered there basically a lot of scientists would point out that they do observe that plants are communicating the forest, so wood networks and send information for instance, warn each other about new pests entering the forest and things are happening like this. So basically there is communication between plants and fungi that has been observed.
Lex Fridman (00:58:40) Well, it’s been observed but we haven’t plugged into it, so it’s like if you observe humans, they seem to be communicating with a smartphone thing, but you don’t understand how smartphone works and how the mechanism of the internet works, but we’re like maybe it’s possible to really understand the full richness of the biological internet that connects us.
Joscha Bach (00:59:01) An interesting question is whether the communication and the organization principles of biological information processing are as complicated as the technology that we’ve built. They set up on very different principles. They simultaneously works very differently in biological systems and the entire thing needs to be stochastic and instead of being fully deterministic or almost fully deterministic as our digital computers are. So there is a different base protocol layer that would emerge over the biological structure, if such a thing would be happening, and again, I’m not saying here that telepathy works and not saying that this is not woo, but what I’m saying is I think I’m open to a possibility that we see that a few bits can be traveling long distance between organisms using biological information processing in ways that we are not completely aware of right now, and that are more similar to many of the stories that were completely normal for our ancestors.
Lex Fridman (01:00:04) Well this kind of interacting, intertwined representations takes us to the big ending of your tweet series. You write, “I wonder if self-improving AGI might end up saturating physical environments with intelligence to such a degree that isolation of individual mental states becomes almost impossible and the representations of all complex self-organizing agents merge permanently with each other.” So that’s a really interesting idea. This biological network, life network, gets so dense that it might as well be seen as one. That’s an interesting… What do you think that looks like? What do you think that saturation looks like? What does it feel like?
Joscha Bach (01:00:56) I think it’s a possibility, it’s just a vague possibility and I like to explain, but what this looks like, I think that the end game of AGI is substrate agnostic. That means that AGI ultimately if it is being built, is going to be smart enough to understand how AGI works. This means it’s not going to be better than people at AGI research and can take over in building the next generation, but it fully understands how it works and how it’s being implemented, and also of course understands how computation works in nature, how to build new feedback loops that you can turn into your own circuits. And this means that the AGI is likely to virtualize itself into any environment that can compute, so it’s not breaking free from the silicon substrate and is going to move into the ecosystems, into our bodies, our brains, and it’s going to merge with all the agency that it finds there.
Lex Fridman (01:01:48) Yeah.
Joscha Bach (01:01:48) So it’s conceivable that you end up with completely integrated information processing across all computing systems, including biological computation on earth, that we end up triggering some new step in the evolution where basically some Gaia is being built over the entirety of all digital and biological computation. And if this happens, then basically everywhere around us, you will have agents that are connected and that are representing and building models of the world and their representations will physically interact. They will vibe with each other, and if you find yourself into an environment that is saturated with modeling compute, where basically you almost every grain of sand could be part of computation that is at some point being started by the AI, you could find yourself in a situation where you cannot escape this shared representation anymore, and where you indeed notice that everything in the world has one shared resonant model of everything that’s happening on the planet. And you notice which part you are in this thing, and you become part of a very larger almost holographic mind in which all the parts are observing each other and form a coherent whole.
Lex Fridman (01:03:07) So you lose the ability to notice yourself as a distinct entity.
Joscha Bach (01:03:14) No, I think that when you’re conscious in your own mind, you notice yourself as a distinct entity, you notice yourself as a self-reflexive observer. And I suspect that we have become conscious at the beginning of our mental development, not at some very high level. Consciousness seems to be part of a training mechanism that biological nervous systems have to discover to become trainable because you cannot take a nervous system like ours and do stochastic way to center spec propagation over a hundred layers. This would not be stable on biological neurons, and so instead we start with some colonizing principle in which a part of the mental representations form a notion of being a self-reflexive absorber that is imposing coherence on its environment and this spreads until the boundary of your mind. And if that boundary is no longer clear cut because AI is jumping across substrates, it would be interesting to see what a global mind would look like that is basically producing a globally coherent language of thought, and is representing everything from all the possible vantage points.
Lex Fridman (01:04:22) That’s an interesting world.
Joscha Bach (01:04:24) The intuition that this thing grew out of is a particular mental state, and it’s a state that you find sometimes in literature, for instance, Neil Gaiman describes it in the ocean at the end of the lane, and it’s this idea that or this experience that there is a state in which you feel that you know everything that can be known and that in your normal human mind, you’ve only forgotten. You’ve forgotten that you are the entire universe. And some people describe this, after they’ve taken extremely large amount of mushrooms or had a big spiritual experience as a hippie in their twenties, and they notice basically that they’re in everything and their body is only one part of the universe and nothing ends at their body, and actually everything is observing and they’re part of this big observer, and the big observer is focused on as one local point in their body and their personality and so on.
(01:05:20) But we can basically have this oceanic state in which we have no boundaries and are one with everything, and a lot of meditators call this the non-dual state because you no longer have the separation between self and world. And as I said, you can explain the state relatively simply without pan-psychism or anything else, but just by breaking down the constructed boundary between self and world and our own mind, but if you combine this with the notion that the systems are physically interacting to the point where their representations are merging and interacting with each other, you would literally implement something like this. It would still be a representational state where you would not be one with physics itself. It would still be cross-grained, would still be much slower than physics itself, but it would be a representation in which you become aware that you’re part of some global information processing system like thought and a global mind, and a conscious thought that coexisting with many other self-reflexive thoughts.
Lex Fridman (01:06:20) Just I would love to observe that from a video game design perspective, how that game looks.
Joscha Bach (01:06:27) Maybe you will after we build AGI and it takes over.
Lex Fridman (01:06:31) But would you be able to step away, step out at the whole thing, just watch the way we can now? Sometimes when I’m at a crowded party or something like this, you step back and you realize, all the different costumes, all the different interactions, all the different computation that all the individual people are at once distinct from each other and at once all the same, part of the same.
Joscha Bach (01:06:56) But it’s already what we do. We can have thoughts that are integrative and we have thoughts that are highly dissociated from everything else and experience themselves as separate.
Lex Fridman (01:07:05) But you want to allow yourself to have those thoughts. Sometimes you resist it.
Joscha Bach (01:07:10) I think that it’s not normative. I want it’s more descriptive. I want to understand the space of states that we can be in and that people are reporting and make sense of them. It’s not that I believe that it’s your job in life to get to a particular state and then you get a high score.
Lex Fridman (01:07:28) Or maybe you do. I think you’re really against this high scoring thing. I kind of like that.
Joscha Bach (01:07:33) Yeah, you’re probably very competitive and I’m not.
Lex Fridman (01:07:35) No, not competitive, like role playing games like Skyram, it’s not competitive. There’s a nice thing… There’s a nice feeling where your experience points go up. You’re not competing against anybody, but it’s the world saying, “You’re on the right track. Here’s a point.”
Joscha Bach (01:07:51) That’s the game thing. It’s the game economy, and I found when I was playing games and was getting addicted to these systems, then I would get into the game and hack it. So I get control over the scoring system and would no longer be subject to it.
Lex Fridman (01:08:05) So you’re now no longer playing, you’re trying to hack it.
Joscha Bach (01:08:09) I don’t want to be addicted to anything. I want to be in charge. I want to have agency over what I do.
Lex Fridman (01:08:14) Addiction is the loss of control for you?
Joscha Bach (01:08:16) Yes. Addiction means that you’re doing something compulsively, and the opposite of freewill is not determinism, it’s compulsion.
Lex Fridman (01:08:26) You don’t want to lose yourself in the addiction to something nice? Addiction to love, to the pleasant feelings with humans experience?
Joscha Bach (01:08:35) No, I find this gets old. I don’t want to have the best possible emotions, I want to have the most appropriate emotions. I don’t want to have the best possible experience, I want to have an adequate experience that is serving my goals, the stuff that I find meaningful in this world.
Lex Fridman (01:08:54) From the biggest questions of consciousness. Let’s explore the pragmatic, the projections of those big ideas into our current world. What do you think about LLMs, the recent rapid development of large language models, of the AI world, of generative AI. How much of the hype is deserved and how much is not? And people should definitely follow your Twitter because you explore these questions in a beautiful, profound and hilarious way at times.

Fame

Joscha Bach (01:09:28) No, don’t follow my Twitter, I already have too many followers.
Lex Fridman (01:09:31) Yeah.
Joscha Bach (01:09:31) Some point it’s going to be unpleasant. I noticed that a lot of people feel that it’s totally okay to punch up and it’s a very weird notion that you feel that you haven’t changed, but your account has grown and suddenly you have a lot of people who casually abuse you. And I don’t like that, that I have to block more than before, and I don’t like this overall vibe shift. And right now it’s still somewhat okay, so pretty much, okay, so I can go to a place where…
Joscha Bach (01:10:01) … pretty much okay, so I can go to a place where people work on stuff that I’m interested in, and there’s a good chance that a few people in the room know me. There’s no awkwardness. But when I get to a point where random strangers feel that they have to have an opinion about me one way or the other, I don’t think I would like that.
Lex Fridman (01:10:19) Random strangers because of your, in their mind, elevated position?
Joscha Bach (01:10:25) Yes. Basically, whenever you are in any way prominent or some celebrity, random strangers will have to have an opinion about you.
Lex Fridman (01:10:36) They forget that you’re human too.
Joscha Bach (01:10:39) I mean, you notice this thing yourself, that the more popular you get, the higher the pressure becomes, the more winds are blowing in your direction from all sides. It’s stressful and it does have a little bit of upside, but it also has a lot of downside.
Lex Fridman (01:10:55) I think it has a lot of upside, at least, for me, currently. At least, perhaps because of the podcast. Because most people are really good and people come up to me and they have love in their eyes and over a stretch of 30 seconds you can hug it out and you can just exchange a few words and you reinvigorate your love for humanity. That’s an upside for a loner. I’m a loner. Because otherwise, you have to do a lot of work to find such humans. Here you are thrust into the full humanity, the goodness of humanity for the most part. Of course, maybe it gets worse as you become more prominent. I hope not. This is pretty awesome.
Joscha Bach (01:11:42) I have a couple handful, very close friends, and I don’t have enough time for them, attention for them as it is. I find this very, very regrettable. Then there are so many awesome, interesting people that I keep meeting, and I would like to integrate them in my life, but I just don’t know how because… But there’s only so much time and attention. The older I get, the harder is to bond with new people in a deep way.
Lex Fridman (01:12:06) But can you enjoy… I mean, there’s a picture of you I think with Roger Penrose and Eric Weinstein and a few others that are interesting figures. Can’t you just enjoy random, interesting humans-
Joscha Bach (01:12:18) Very much.
Lex Fridman (01:12:18) … for a short amount of time?
Joscha Bach (01:12:20) Also, I like these people. What I like is intellectual stimulation, and I’m very grateful that I’m getting it.
Lex Fridman (01:12:26) Can you not be melancholy or maybe I’m projecting I hate goodbyes? Can we just not hate goodbyes and just enjoy the hello, take it in a person, take in their ideas, and then move on through life?
Joscha Bach (01:12:40) I think it’s totally okay to be said about goodbyes because that indicates that there was something that you’re going to miss.
Lex Fridman (01:12:49) But it’s painful. Maybe that’s one of the reasons I’m an introvert is I hate goodbyes.
Joscha Bach (01:12:59) But you have to say goodbye before you say hello again.
Lex Fridman (01:13:02) I know. But that experience of loss, that mini loss, maybe that’s a little death. Maybe I don’t know. I think this melancholy feeling is just the other side of love, and I think they go hand in hand, and it’s a beautiful thing. I’m just being romantic about it at the moment.
Joscha Bach (01:13:26) I’m not no stranger to melancholy and sometimes it’s difficult to be alive. Sometimes it’s just painful to exist.
Lex Fridman (01:13:36) But that there’s beauty in that pain too. That’s what melancholy feeling is. It’s not negative. Melancholy doesn’t have to be negative.
Joscha Bach (01:13:43) Can also kill you.
Lex Fridman (01:13:44) Well, we all die eventually. Now as we got through this topic, the actual question was about what your thoughts are about the recent development of large language models with ChatGPT.
Joscha Bach (01:13:59) Indeed.
Lex Fridman (01:14:00) There’s a lot of hype. Is some of the hype justified, which is, which isn’t? What are your thoughts high level?
Joscha Bach (01:14:09) I find that large language models do help us coding. It’s an extremely useful application that is for a lot of people taking stack overflow out of their life in exchange for something that is more efficient. I feel that ChatGPT is like an intern that I have to micromanage. I have been working with people in the past who were less capable than ChatGPT. I’m not saying this because I hate people, but they personally as human beings, there was something present that was not there in ChatGPT, which was why I was covering for them. But ChatGPT has an interesting ability. It does give people superpowers and the people who feel threatened by them are the prompt completers. They are the people who do what ChatGPT is doing right now. If you are not creative, if you don’t build your own thoughts, if you don’t have actual plans in the world, and your only job is to summarize emails and to expand simple intentions into emails again, then ChatGPT might look like a threat.
(01:15:16) But I believe that it is a very beneficial technology that allows us to create more interesting stuff and make the world more beautiful and fascinating if we find to build it into our life in the right ways. I’m quite fascinated by these large language models, but I also think that they are by no means the final development. It’s interesting to see how this development progresses. One thing that the out-of-the-box vanilla language models have as a limitation is that they have still some limited coherence and ability to construct complexity. Even though they exceed human abilities to do what they can do one shot, typically, when you write a text with a language model or using it or when you write code with a language model, it’s not one shot because there won’t be bugs in your program and design errors and compiler error and so on.
(01:16:12) Your language model can help you to fix those things. But this process is out of the box not automated yet. There is a management process that also needs to be done. There are some interesting developments BabyAGI and so on that are trying to automate this management process as well. I suspect that soon we are going to see a bunch of cognitive architectures where every module is in some sense a language model or something equivalent. Between the language models, we exchange suitable data structures, not English, and produce compound behavior of this whole thing.
Lex Fridman (01:16:49) To do some of the “prompt engineering” for you. They create these cognitive architectures that do the prompt engineering and you’re just doing the high, high-level meta prompt engineering.
Joscha Bach (01:17:02) There are limitations in a language model alone. I feel that part of my mind works similarly to a language model, which means I can yell into it a prompt, and it’s going to give me a creative response. But I have to do something with those points first. I have to take it as a generative artifact that may or may not be true. It’s usually a confabulation, it’s just an idea. Then I take this idea and modify it. I might build a new prompt that is stepping off this idea and develop it to the next level or put it into something larger, or I might try to prove whether it’s true or make an experiment. This is what the language models right now are not doing yet, but there’s also no technical reason for why they shouldn’t be able to do this.
(01:17:49) The way to make a language model coherent is probably not to use reinforcement learning until it only gives you one possible answer that is linking to its source data, but it’s using this as a component in the larger system that can also be built by the language model or is enabled by language model structured components or using different technologies. I suspect that language models will be an important stepping stone in developing different types of systems. One thing that is really missing in the form of language models that we have today is real-time world coupling, right? It’s difficult to do perception with a language model and motor control with a language model. Instead, you would need to have different type of thing that is working with it. Also, the language model is a little bit obscuring what its actual functionality is. Some people associate the structure of the neural network of the language model with the nervous system.
(01:18:49) I think that’s the wrong intuition. The neural networks are unlike nervous system. They are more like 100-step functions that use differentiable linear algebra to approximate correlation between adjacent brain states. It’s basically a function that moves the system from one representational state to the next representational state. So if you try to map this into a metaphor that is closer to our brain, imagine that you would take a language model or a model like DELI that you use… For instance, this image-guided diffusion to approximate and camera image and use the activation state of the neural network to interpret the camera image, which in principle I think will be possible very soon. You do this periodically, and now you look at these patterns, how when this thing interacts with the world periodically look like as in time, and these time slices, they are somewhat equivalent to the activation state of the brain at a given moment.
Lex Fridman (01:19:52) How is the actual brain different? Just the asynchronous craziness?
Joscha Bach (01:19:59) For me, it’s fascinating that they are so vastly different and yet in some circumstances produce somewhat similar behavior. The brain is, first of all, different because it’s a self-organizing system where the individual cell is an agent that is communicating with the other agent that’s around it and is always trying to find some solution. All the structure that pops up is emergent structure. One way in which you could try to look at this is that individual neurons probably need to get a reward so they become trainable, which means they have to have inputs that are not affecting the metabolism or the cell directly, but they’re messages, semantic messages that tell the cell whether it’s just done good or bad and in which direction it should shift its behavior.
(01:20:43) Once you have such an input, neurons become trainable, and you can train them to perform computations by exchanging messages with other neurons and parts of the signals that they’re exchanging and parts of the computation that are performing are control messages that perform management tasks for other neurons and other cells also suspect that the brain does not stop at the boundary of neurons to other cells, but many adjacent cells will be involved intimately in the functionality of the brain and will be instrumental in distributing rewards and in imagining its functionality.
Lex Fridman (01:21:19) It’s fascinating to think about what those characteristics of the brain enable you to do that language models cannot do.
Joscha Bach (01:21:27) First of all, there’s a different loss function at work when we learn. To me, it’s fascinating that you can build a system that looks at 800 million pictures and captions and correlates them because I don’t think that a human nervous system could do this. For us, the world is only learnable because the adjacent frames are related and we can afford to discard most of that information during learning. We basically take only in stuff that makes us more coherent, not less coherent, and our neural networks are willing to look at data that is not making the neural network coherent at first, but only in the long run by doing lots and lots of statistics, eventually, patterns become visible and emerge. Our mind seems to be focused on finding the patterns as early as possible.
Lex Fridman (01:22:13) Yeah. Filtering early on, not later.
Joscha Bach (01:22:16) Yes. It’s a slightly different paradigm and it leads to much faster convergence. We only need to look the tiny fraction of the data to become coherent. Of course, we do not have the same richness as our train models. We will not incorporate the entirety of text in the internet and be able to refer to it and have all this knowledge available and being able to confabulate over it. Instead, we have a much, much smaller part of it that is more deliberately built. To me, it would be fascinating to think about how to build such systems. It’s not obvious that they would necessarily be more efficient than us on a digital substrate, but I suspect that they might, so I suspect that the actual AGI that is going to be more interesting is going to use slightly different algorithmic paradigms or sometimes massively different algorithmic paradigms than the current generation of transformer-based learning system.
Lex Fridman (01:23:08) Do you think it might be using just a bunch of language models like this? Do you think the current transformer-based large language models will take us to AGI?
Joscha Bach (01:23:20) My main issue is I think that they’re quite ugly and brutalist-
Lex Fridman (01:23:25) Brutalist? Is that what you said?
Joscha Bach (01:23:27) Yes. They are basically brute forcing the problem of thought. By training this thing with looking at instances where people have thought and then trying to deepfake that. If you have enough data, the deepfake becomes indistinguishable from the actual phenomenon, and in many circumstances, it’s going to be identical.
Lex Fridman (01:23:46) Can you deepfake it till you make it? Can you achieve… What are the limitations of this? I mean, can you reason? Let’s use words that are loaded.
Joscha Bach (01:23:57) Yes. That’s a very interesting question. I think that these models clearly making some inference, but if you give them a reasoning task, it’s often difficult for the experimenters to figure out whether the reasoning is the result of the emulation of the reasoning strategy that they saw in human written text or whether it’s something that the system was able to infer by itself. On the other hand, if you think of human reasoning, if you want to become a very good reasoner, you don’t do this by just figuring out yourself. You read about reasoning. The first people who tried to write about reasoning and reflect on it didn’t get it right. Even Aristotle who thought about this very hard and came up with a theory of how syllogisms works and syllogistic reasoning has mistakes in his attempt to build something like a formal logic and gets maybe 80% right. The people that are talking about reasoning professionally today Tarski and Frege and build on their work.
(01:24:55) In many ways, people when they perform reasoning are emulating what other people wrote about reasoning, right? It’s difficult to really draw this boundary. When François Chollet says that these models are only interpolating between what they saw and what other people are doing. Well, if you give them all the latent dimensions, it can be extracted from the internet. What’s missing? Maybe there is almost everything there. If you’re not sufficiently informed by these dimensions and you need more, I think it’s not difficult to increase the temperature in the large language model to the point that is producing stuff that is maybe 90% nonsense and 10% viable and combine this with some prover that is trying to filter out the viable parts from the nonsense in the same way as our own thinking works. When we are very creative, we increase the temperature in our own mind, and we recreate hypothetical universes and solutions, most of which will not work.
(01:25:54) Then we test and we test by building a core that is internally coherent and we use reasoning strategies that use some axiomatic consistency by which we can identify those strategies and thoughts and subuniverses that are viable and that can expand our thinking. If you look at the language models, they have clear limitations right now. One of them is they’re not coupled to the world in real time in the way in which our nervous systems are. It’s difficult for them to observe themselves in the universe and to observe what universe they’re in. Second, they don’t do real-time learnings. They basically get only trained with algorithms that rely on the data being available in batches, so it can be parallelized and run sufficiently on the network and so on. Real-time learning would be very slow so far and inefficient.
(01:26:43) That’s clearly something that our nervous systems can do to some degree. There is a problem with these models being coherent, and I suspect that all these problems are solvable without a technological revolution. We don’t need fundamentally new algorithms to change that. For instance, you can enlarge in the context window, and thereby basically create working memory in which you train everything that happens during the day. If that is not sufficient, you add a database and you write some clever mechanisms that the system learns to use to swap out in and out stuff from its prompt context. If that is not sufficient, if your database is full in the evening, overnight, you just train. If system is going to sleep and dream and is going to train the staff from its database into the larger model, but fine-tuning it, building additional layers, and so on.
(01:27:32) Then the next day, it starts with a fresh database in the morning with fresh ice has integrated all this stuff. When you talk to people and you have strong disagreements about something, which means that in their mind they have a faulty belief or you have a faulty belief, there’s a lot of dependencies on it. Very often, you will not achieve agreement in one session, but you need to sleep about this once or multiple times before you have integrated all these necessary changes in your mind. Maybe it’s already somewhat similar, right?
Lex Fridman (01:28:00) There’s already a latency even for humans to update the model, retrain the model.
Joscha Bach (01:28:04) Of course, we can combine the language model with models that get coupled to reality in real-time and can build multimodal model and bridge between vision models and language models and so on. There is no reason to believe that the language models will necessarily run into some problem that will prevent them from becoming generally intelligent. But I don’t know that. It’s just I don’t see proof that they wouldn’t. My issue is I don’t like them. I think that they’re inefficient. I think that they use way too much compute. I think that given the amazing hardware that we have, we could build something that is much more beautiful than our own mind, and this thing is not as beautiful as our own mind despite being so much larger.
Lex Fridman (01:28:47) But it’s a proof of concept.
Joscha Bach (01:28:49) It’s the only thing that works right now. It’s not the only game in town, but it’s the only thing that has this utility with so much simplicity. There’s a bunch of relatively simple algorithms that you can understand in relatively few weeks that can be scaled up massively.
Lex Fridman (01:29:07) It’s the Deep Blue of chess playing. Yeah, it’s ugly.
Joscha Bach (01:29:11) Yeah. Claude Shannon had this… When you describe chess suggested that there are two main strategies in which you could play chess. One is that you are making a very complicated plan that reaches far into the future and you try not to make a mistake while enacting it. This is basically the human strategy. The other strategy is that you are brute forcing your way to success, which means you make a tree of possible moves where you look at in principle every move that is open to you or the possible answers, and you try to make this as deeply as possible. Of course, you optimize, you cut off trees that don’t look very promising, and you use libraries of end game and early game and so on to optimize this entire process. But this brute force strategy is how most of the chess programs were built, and this is how computers get better than humans at playing chess. I look at the large language models, I feel that I’m observing the same thing. It’s basically the brute force strategy to thought by training the thing on pretty much the entire internet and then in the limit it gets coherent to a degree that approaches human coherence. On a side effect, it’s able to do things that no human could do, right? It’s able to sift through massive amounts of text relatively quickly and summarize them quickly and it never lapses in attention. I still have the illusion that when I play with ChatGPT, that it’s in principle not doing anything that I could not do if I had Google at my disposal and I get all the resources from the internet and spend enough time on it. But this thing that I have an extremely autistic stupid intern in a way that is extremely good at drudgery, and I can offload the drudgery to the degree that I’m able to automate the management of the intern is something that is difficult for me to overhype at this point because we have not yet started to scratch the surface of what’s possible with this.
Lex Fridman (01:31:03) But it feels like it’s a tireless intern or maybe it’s an army of interns. So you get to command these slightly incompetent creatures and there’s an aspect because of how rapidly you can iterate with it. It’s also part of the brainstorming, part of the inspiration for your own thinking. You get to interact with the thing. I mean, when I’m programming or doing any generational GPT, it’s somehow is a catalyst for your own thinking. In a way, that I think an intern might not be.
Joscha Bach (01:31:39) Yeah, it gets really interesting I find as when you turn it into a multi-agent system. For instance, you can get the system to generate a dialogue between a patient and a doctor very easily. But what’s more interesting is you have one instance of ChatGPT that is the patient and you tell it in the prompt what complicated syndrome it has. The other one is a therapist who doesn’t know anything about this patient, and you just have these two instances battling it out and observe the psychiatrist or a psychologist trying to analyze the patient and trying to figure out what’s wrong with the patient. If you try to take away large problem, for instance, how to build a company and you turn this into lots and lots of sub-problems, then often you can get to a level where the language model is able to solve this.
(01:32:30) What I also found interesting is based on the observation that ChatGPT is pretty good at translating between programing languages, but sometimes there’s difficulty to write very long coherent algorithms that you need to write them as human author. Why not design a language that is suitable for this? Some kind of pseudocode that is more relaxed than Python. That allows you to sometimes specify a problem vaguely in human terms and let ChatGPT take care of the rest. You can use ChatGPT to develop that syntax for it and develop new programming paradigms in this way. We very soon get to the point where this age-old question for us computer scientists, what is the best programing language, and can we write a better programing language? Now I think that almost every serious computer scientist goes through a phase like this in their life.
(01:33:26) This question that is almost no longer relevant because what is different between the programming language is not what they let the computer do, but what they let you think about what the computer should be doing. Now the ChatGPT becomes an interface to this in which you can specify in many, many ways what the computer should be doing and ChatGPT or some other language model or combination of system is going to take care of the rest.
Lex Fridman (01:33:50) Allow you expand the realm of thought you’re allowed to have when interacting with the computer. It sounds to me like you’re saying there’s basically no limitations. Your intuition says to what larger language-
Joscha Bach (01:34:05) I don’t know of that limitation. When I currently play with it’s quite limited. I wish that it was way better.
Lex Fridman (01:34:10) But isn’t that your fault versus the large language model?
Joscha Bach (01:34:13) I don’t know. Of course, it’s always my fault. There’s probably a way to make it lot better.
Lex Fridman (01:34:16) Is everything your fault? I just want to get you on the record saying.
Joscha Bach (01:34:18) Yes, everything is my fault. That doesn’t work in my life. At least, that is usually the most useful perspective for myself. Even though with hindsight I feel no. I sometimes wish I could have seen myself as part of my environment more and understand that a lot of people are actually seeing me and looking at me and are trying to make my life work in the same way as I try to help others. Making this switch to this level-three perspective is something that happened long after my level-four perspective in my life. I wish that I could have had it earlier. It’s also not now that I don’t feel like I’m complete, I’m all over the place. That’s all.

Happiness

Lex Fridman (01:34:58) Where’s happiness in terms of stages is on three or four that you take that tangent?
Joscha Bach (01:35:02) You can be happy at any stage or unhappy. But I think that if you are at a stage where you get agency over how your feelings are generated. To some degree you start doing this when you [inaudible 01:35:15] sense, I believe that you understand that you are in charge of your own emotion to some degree and that you are responsible how you approach the world, that it’s basically your task to have some basic hygiene how in the way in which you deal with your mind and you cannot blame your environment for the way in which you feel. But you live in a world that is highly mobile and it’s your job to choose the environment that you thrive and to build it.
(01:35:42) Sometimes it’s difficult to get the necessary strength and energy to do this and independence. The worst you feel, the harder it is. But it’s something that we learn. It’s also this thing that we are usually incomplete, right? I’m a rare mind, which means I’m a mind that is incomplete in ways that are harder to complete. For me, it might have been harder to initially to find the right relationships and friends that complete me to the degree that I become an almost functional human being.
Lex Fridman (01:36:14) Oh, man, the search space of humans that complete you is an interesting one, especially for Joscha Bach. That’s an interesting… Because talking about brute-force search in chess, I wonder what that search tree looks like.
Joscha Bach (01:36:31) I think that my rational thinking is not good enough to solve that task. A lot of problems in my life that I can conceptualize as software problems and the failure modes are bugs, and I can debug them and write software that take care of the missing functionality. But there is stuff that I don’t understand well enough to and to use my analytical reasoning to solve the issue. Then I have to develop my intuitions and often I have to do this with people who are wiser than me. That’s something that’s hard for me because I’m not born with the instinct to submit to other people’s wisdom.
Lex Fridman (01:37:06) What problems are we talking about? This is stage three love?
Joscha Bach (01:37:11) I found love was never hard.
Lex Fridman (01:37:14) What is hard then?
Joscha Bach (01:37:17) Fitting into a world that most people work differently than you and have different intuitions of what should be done.
Lex Fridman (01:37:24) Empathy?
Joscha Bach (01:37:27) It’s also aesthetics. When you come into a world where almost everything is ugly and you come out of a world where everything is beautiful. I grew up in a beautiful place and as a child of an artist. In this place, it was mostly nature. Everything had intrinsic beauty and everything was built out of an intrinsic need for it to work for itself. Everything that my father created was something that he made to get the world to work for himself. I felt the same thing. When I come out into the world, and I am asked to submit to lots and lots of rules, I’m asking, okay, when I observe your stupid rules, what is the benefit? I see the life that is being offered as a reward, it’s not attractive.
Lex Fridman (01:38:16) When you were born and raised in extraterrestrial prints in a world full of people wearing suits, it’s a challenging integration.
Joscha Bach (01:38:27) Yes. But it also means that I’m often blind for the ways in which everybody is creating their own bubble of wholesomeness or almost everybody. People are trying to do it. For me, to discover this, it was necessary that I found people who had a similar shape of soul as myself. Basically, where I felt these are my people that treat each other in such a way as if they’re around with each other for eternity.
Lex Fridman (01:38:51) How long does it take you to detect the geometry, the shape of the soul of another human to notice that they might be one of your kind?
Joscha Bach (01:39:00) Sometimes it’s instantly, and I’m wrong. Sometimes it takes a long time.
Lex Fridman (01:39:05) You believe in love at first sight, Joscha Bach?
Joscha Bach (01:39:09) Yes. But I also noticed that I have been wrong. Sometimes I look at a person and I’m just enamored by everything about them. Sometimes this persists and sometimes it doesn’t. I have the illusion that it much better at recognizing who people are as I grow older.
Lex Fridman (01:39:33) But that could be just cynicism. No.
Joscha Bach (01:39:37) No, It’s not cynicism. It’s often more that I’m able to recognize what somebody needs when we interact and how we can meaningfully interact. It’s not cynical at all.
Lex Fridman (01:39:49) You’re better at noticing.
Joscha Bach (01:39:50) Yes, I’m much better I think in some such circumstances at understanding how to interact with other people than I did when I was young.
Lex Fridman (01:39:59) That takes us to-
Joscha Bach (01:40:00) It doesn’t mean that I’m always very good at it.
Lex Fridman (01:40:03) That takes us back to prompt engineering of noticing how to be a better prompt engineer of an LLM. A sense I have is that there’s a bottomless well of skill to become a great prompt engineer. It feels like it is all my fault whenever I fail to use ChatGPT correctly that I didn’t find the right words.
Joscha Bach (01:40:26) Most of the stuff that I’m doing in my life doesn’t need ChatGPT. There are a few tasks that where it helps, but the main stuff that I need to do like developing my own thoughts and aesthetics and relationship to people, and it’s necessary for me to write for myself because writing is not so much about producing an artifact that other people can use, but it’s a way to structure your own thoughts and develop yourself. I think this idea that kids are writing their own essays with ChatGPT in the future is going to have this drawback that they miss out on the ability to structure their own minds via writing. I hope that the schools that our kids are in will retain the wisdom of understanding what parts should be automated and which ones shouldn’t.
Lex Fridman (01:41:15) But at the same time, it feels like there’s power in disagreeing with the thing that ChatGPT produces. I use it like that for programming. I’ll see the thing it recommends, and then I’ll write different code that disagree, and in the disagreement, your mind grows stronger.
Joscha Bach (01:41:32) I’m recently wrote a tool that is using the camera on my MacBook and Swift to read pixels out of it and manipulate them and so on. I don’t know Swift. It was super helpful to have this thing that is writing stuff for me. Also, interesting that mostly it didn’t work at first. I felt like I was talking to a human being who was trying to hack this on my computer without understanding my configuration very much. Also, making a lot of mistakes. Sometimes it’s a little bit incoherent, so you have to ultimately understand what it’s doing. It’s still no other way around it, but I do feel it’s much more powerful and faster than using Stack Overflow.

Artificial consciousness

Lex Fridman (01:42:15) Do you think GPTN can achieve consciousness?
Joscha Bach (01:42:22) Well, GPTN probably, it’s not even clear for the present systems. When I talk to my friends at OpenAI, they feel that this question, whether the models currently are conscious is much more complicated than many people might think. I guess that it’s not that OpenAI has a homogenous opinion about this, but there’s some aspects to this. One is, of course, this language model has written a lot of text in which people were conscious or describe their own consciousness, and it’s emulating this. If it’s conscious, it’s probably not conscious in a way that is closed to the way in which human beings are conscious. But while it is going through these states and going through 100-step function that is emulating adjacent brain states that require a degree of self-reflection, it can also create a model of an observer that is reflecting itself in real-time and describe what that’s like.
(01:43:16) While this model is the deepfake, our own consciousness is also as if it’s virtual, right? It’s not physical. Our consciousness is a representation of a self-reflexive observer that only exists in patterns of interaction between cells. It is not a physical object in the sense that exists in base reality, but it’s really a representational object that develops its causal power only from a certain modeling perspective.
Lex Fridman (01:43:42) It’s virtual.
Joscha Bach (01:43:42) Yes. To which degree is the virtuality of the consciousness and ChatGPT more virtual and less causal than the virtuality of our own consciousness? But you could say it doesn’t count. It doesn’t count much more than the consciousness of a character in a novel, right? It’s important for the reader to have the outcome. The artifact is describing in the text generated by the author of the book, what it’s like to be conscious in a particular situation and performs the necessary inferences.
(01:44:14) But the task of creating coherence in real-time in a self-organizing system by keeping yourself coherent so the system is reflexive, that is something that the language models don’t need to do. There is no causal need for the system to be conscious in the same way as we are. For me, it would be very interesting to experiment with this, to basically build a system like a CAT probably should be careful at first, build something that’s small, that’s limited resources that we can control, and study how systems notice a self-model, how they become self-aware in real-time. I think it might be a good idea to not start with the language model but to start from scratch using principles of self-organization.
Lex Fridman (01:44:58) Okay. Can you elaborate why you think that is so self-organization this…
Lex Fridman (01:45:00) … why you think that is? So, self-organization, this kind of radical legality that you see in the biological systems, why can’t you start with a language model, what’s your intuition?
Joscha Bach (01:45:11) My intuition is that the language models that we are building are golems. They are machines that you give a task, and they’re going to execute the task until some condition is met and there’s nobody home. And the way in which nobody is home leads to that system doing things that are undesirable in a particular context.
Lex Fridman (01:45:29) Yeah.
Joscha Bach (01:45:30) So, you have that thing talking to a child and maybe it says something that could be shocking and traumatic to the child. Or you have that thing writing a speech and it introduces errors in the speech that no human being would ever do if they’re responsible. The system doesn’t know who’s talking to whom. There is no ground truth that the system is embedded into.
(01:45:51) And of course we can create an external tool that is prompting our language model always into the same semblance of ground truth, but it’s not like the internal structure is causally produced by the needs of a being to survive in the universe, it is produced by imitating structure on the internet.
Lex Fridman (01:46:12) Yeah, but can we externally inject into it this coherent approximation of a world model that has to sync up?
Joscha Bach (01:46:24) Maybe it is sufficient to use the transformer with the different dust function that optimizes for short-term coherence rather than next-token prediction over the long run. We had many definitions of intelligence in history of AI, next-token prediction was not very high up.
(01:46:43) And there are some similarities like cognition as data compression is an odd trope, Solomonoff induction where you are trying to understand intelligence as predicting future observations from past observations, which is intrinsic to data compression.
(01:47:01) And predictive coding is a paradigm that there’s boundary between neuroscience and physics and computer science, so it’s not something that is completely alien, but this radical thing that you only do in next-token prediction and see what happens is something where most people, I think, were surprised that this works so well.
Lex Fridman (01:47:24) So simple, but is it really that much more radical than just the idea of compression, intelligence is compression?
Joscha Bach (01:47:32) The idea that compression is sufficient to produce all the desired behaviors is a very radical idea.
Lex Fridman (01:47:40) But equally radical as the next token prediction?
Joscha Bach (01:47:44) It’s something that wouldn’t work in biological organisms, I believe.
Lex Fridman (01:47:47) Yeah.
Joscha Bach (01:47:47) Biological organisms have something like next frame prediction for our perceptual system where we try to filter out principal components out of the perceptual data and build hierarchies over them to track the world. But our behavior ultimately is directed by hundreds of physiological and probably dozens of social and a few cognitive needs that are intrinsic to us, that are built into the system as reflexes and direct us until we can transcend them and replace them by instrumental behavior that relates to our higher goals.
Lex Fridman (01:48:20) And it also seems so much more complicated and messy than next frame prediction, even the idea of frame seems counter biological.
Joscha Bach (01:48:28) Yes, of course, there’s not this degree of simultaneity in the biological system. But again, I don’t know whether this is actually an optimization if we imitate biology here, because creating something like simultaneity is necessary for many processes that happen in the brain. And you see the outcome of that by synchronized brainwaves, which suggests that there is indeed synchronization going on, but the synchronization creates overhead and this overhead is going to make the cells more expensive to run and you need more redundancy and it makes the system slower.
(01:48:59) So, if you can build a system in which the simultaneity gets engineered into it, maybe you have a benefit that you can exploit that is not available to the biological system and that you should not discard right away.
Lex Fridman (01:49:15) You tweeted, once again, “When I talk to ChatGPT, I’m talking to an NPC. What’s going to be interesting, and perhaps scary, is when AI becomes a first person player.” So, what does that step look like? I really like that tweet, that step between NPC to first person player. What’s required for that?
(01:49:39) Is that kind of what we’ve been talking about, this kind of external source of coherence and inspiration of how to take the leap into the unknown that we humans do? Man’s search for meaning, LLM’s search for meaning.
Joscha Bach (01:49:59) I don’t know if the language model is the right paradigm because it is doing too much. It’s giving you too much and it’s hard once you have too much to take away from it again. The way in which our own mind works is not that we train a language model in our own mind and after the language model is there, we build a personal self on top of it that then relates to the world.
(01:50:22) There is something that is being built, right? There is a game management that is being built. There is a language of thought that is being developed that allows different parts of the mind to talk to each other, and this is a bit of a speculative hypothesis that this language of thought is there, but I suspect that it’s important for the way in which our own minds work. And building these principles into a system might be a more straightforward way to a first person AI, so to something that first creates an intentional self and then creates a personal self.
(01:50:55) So, the way in which this seems to be working, I think, is that when the game engine is built in your mind, it’s not just following gradients where you are stimulated by the environment and then end up with having a solution to how the world works. I suspect that building this game engine in your own mind does require intelligence, it’s a constructive task where at times you need to reason, and this is a task that we are fulfilling in the first years of our life.
(01:51:27) So, during the first year of its life, an infant is building a lot of structure about the world that does inquire experiments and some first principles, reasoning and so on. And in this time there is usually no personal self. There is a first person perspective, but it’s not a person. This notion that you are a human being that is interacting in a social context and is confronted with an immutable world in which objects are fixed and can no longer be changed, in which the dream can no longer be influenced, it’s something that emerges a little bit later in our life.
(01:52:02) And I personally suspect that this is something that our ancestors had known and we have forgotten because I suspect that it’s there in plain sight in Genesis 1, in this first book of the Bible, where it’s being described that this creative spirit is hovering over the substrate and then is creating a boundary between the world model and sphere of ideas, earth and heaven, as they’re being described there, and then it’s creating contrast and then dimensions and then space, and then it creates organic shapes and solids and liquids and builds a world from them and creates plants and animals, give them all their names.
(01:52:43) And once that’s done, it creates another spirit in its own image, but it creates it as men and women, as something that thinks of itself as a human being and puts it into this world. And the Christians mistranslate this, I suspect, when they say this is the description of the creation of the physical universe by a supernatural being. I think this is literally a description of how in every mind a universe is being created as some kind of game engine by a creative spirit, our first consciousness that emerges in our mind even before we are born and that creates the interaction between organism and world. And once that is built and trained, the personal self is being created and we only remember being the personal self, we no longer remember how we created the game engine.
Lex Fridman (01:53:30) So, God in this view is the first creative mind in the early…
Joscha Bach (01:53:35) It’s the first consciousness.
Lex Fridman (01:53:37) In the early days, in the early months.
Joscha Bach (01:53:40) Yes.
Lex Fridman (01:53:40) Of development
Joscha Bach (01:53:41) And it’s still there. You still have this outer mind that creates your sense of whether you’re being loved by the world or not and what your place in the world is, right? It’s something that is not yourself that is producing this, it’s your mind that does it. So, there is an outer mind that basically is an agent that determines who you are with respect to the world, and while you are stuck being that personal self in this world, until you get to stage six to destroy the boundary.
(01:54:10) And we all do this, I think, earlier in small glimpses, and maybe we’re sometimes we can remember what it was like when we were a small child and get some glimpses into how it’s been, but for most people that rarely happens.

Suffering

Lex Fridman (01:54:23) Just glimpses. You tweeted, “Suffering results for one part of the mind failing at regulating another part of the mind. Suffering happens at an early stage of mental development. I don’t think that superhuman AI would suffer.” What’s your intuition there?
Joscha Bach (01:54:40) The philosopher Thomas Metzinger is very concerned that the creation of superhuman intelligence would lead to superhuman suffering.
Lex Fridman (01:54:46) Yeah.
Joscha Bach (01:54:47) And so, he’s strongly against it. And personally, I don’t think that this happens because suffering is not happening at the boundary between ourself and the physical universe. It’s not stuff on our skin that makes us suffer. It happens at the boundary between self and world, and the world here is the world model, it’s the stuff that is created by your mind.
Lex Fridman (01:55:11) But that’s all-
Joscha Bach (01:55:12) It’s a presentation of how the universe is and how it should be and how you yourself relate to this and at this boundary is where suffering happens. So suffering in some sense is self-inflicted, but not by your personal self, it’s inflicted by the mind on the personal self that experiences itself as you, and you can turn off suffering when you are able to get on this outer level.
(01:55:35) So, when you manage to understand how the mind is producing pain and pleasure and fear and love and so on, then you can take charge of this and you get agency of whether you’re suffer. Technically, what pain and pleasure is, they are learning signals, right? Part of your brain is sending a learning signal to another part of the brain to improve its performance. And sometimes this doesn’t work because this trainer who sense the signal does not have a good model of how to improve the performance, so it’s sending a signal, but the performance doesn’t get better and then it might crank up the pain and it gets worse and worse and the behavior of the system may be even deteriorating as a result, but until this is resolved, this regulation issue, your pain is increasing, and this is, I think, typically what you describe as suffering.
(01:56:31) So, in this sense, you could say that pain is very natural and helpful, but suffering is the result of a regulation problem in which you try to regulate something that cannot actually be regulated, and that could be resolved if you would be able to get at the level of your mind where the pain signal is being created and rerouted and improve the regulation. And a lot of people get there, if you are a monk who is spending decades reflecting about how their own psyche works, you can get to the point where you realize that suffering is really a choice and you can choose how your mind is set up.
(01:57:11) And I don’t think that AI would stay in the state where the personal self doesn’t get agency or this model, what the system has about itself, it doesn’t get agency how it’s actually implemented. It wouldn’t stay in that state for very long.
Lex Fridman (01:57:22) So, it goes through the stages real quick, the seven stages, it’s going to go to enlightenment real quick.
Joscha Bach (01:57:27) Yeah. Of course, there might be a lot of stuff happening in between because if we have a system that works at a much higher frame rate than us, then even though it looks very short to us, maybe for the system there’s a much longer subjective time in which things are unpleasant.
Lex Fridman (01:57:42) What if the thing that we recognize as super intelligent is actually living at stage five, that the thing that’s stage six enlightenment is not very productive, so in order to be productive in society and impress us with this power, it has to be a reasoning self authoring agent, that enlightenment makes you lazy as an agent in the world?
Joscha Bach (01:58:06) Well, of course it makes you lazy, because you no longer see the point, so it doesn’t make you not lazy, it just, in some sense, adapts you to what you perceive as your true circumstances.
Lex Fridman (01:58:19) So, what if all AGIs, they’re only productive as they progress through one, two, three, four, five, and the moment they get to six, it’s a failure mode essentially, as far as humans are concerned, because they’re just start chilling, they’re like, “Fuck it, I’m out.”
Joscha Bach (01:58:36) Not necessarily. I suspect that the monks who are self emulated for their political beliefs to make statements about the occupation of Tibet by China, they were probably being able to regulate the physical pain in any way they wanted to. And suffering was the spiritual suffering that was the result of that choice that they made of what they wanted to identify as. So, stage five doesn’t necessarily mean that you have no identity anymore, but you can choose your identity, you can make it instrumental to the world that you want to have.

Eliezer Yudkowsky

Lex Fridman (01:59:09) Let me bring up Eliezer Yudkowsky and his warnings to human civilization that AI will likely kill all of us. What are your thoughts about his perspective on this? Can you steel man his case and what aspects with it do you disagree?
Joscha Bach (01:59:31) One thing that I find concerning in the discussion of his arguments that many people are dismissive of his arguments, but the counterarguments that they’re giving are not very convincing to me. And so, based on this state of discussion, I find that from Eliezer’s perspective, and I think I can take that perspective to some approximate degree that probably is normally at his intellectual level, but I think I see what he’s up to and why he feels the way he does and it makes total sense.
(02:00:04) I think that his perspective is somewhat similar to the perspective of Ted Kaczynski, the infamous Unabomber, and not that Eliezer would be willing to send pipe bombs to anybody to blow them up, but when he wrote this Times article in which he warned about AI being likely to kill everybody and that we would need to stop its development or halt it, I think there is a risk that he’s taking that somebody might get violent if they read this and get really, really scared. So, I think that there is some consideration that he’s making where he’s already going in this direction where he has to take responsibility if something happens and people get harmed.
(02:00:49) And the reason why Ted Kaczynski did this, was that from his own perspective, technological society cannot be made sustainable, it’s doomed to fail, it’s going to lead to an environmental and eventually also human holocaust in which we die because of the environmental destruction, the destruction of our food chains, the pollution of the environment. And so, from Kaczynski’s perspective, we need to stop industrialization, we need to stop technology, we need to go back because he didn’t see a way moving forward and I suspect that in some sense there’s a similarity in Eliezer’s thinking to this kind of fear about progress.
(02:01:27) And I’m not dismissive about this at all, I take it quite seriously. And I think that there is a chance that could happen, that if we build machines that get control over processes that are crucial for the regulation of life on earth and we no longer have agency to influence what’s happening there, that this might create large scale disasters for us.
Lex Fridman (02:01:54) Do you have a sense that the march towards this uncontrollable autonomy of super intelligent systems is inevitable? I mean, that’s essentially what he’s saying, that there’s no hope. His advice to young people was prepare for a short life.
Joscha Bach (02:02:17) I don’t think that’s useful. I think from a pragmatic perspective, you have to bet always on the timelines in which you’re alive. It doesn’t make sense to have a financial bet in which you bet that the financial system is going to disappear, right?
Lex Fridman (02:02:31) Yeah.
Joscha Bach (02:02:31) Because there cannot be any payout for you. So, in principle, you only need to bet on the timelines in which you’re still around or people that you matter about or things that you matter about, maybe consciousness on earth. But there is a deeper issue for me, personally, and it is, I don’t think that life on earth is about humans. I don’t think it’s about human aesthetics, I don’t think it’s about Eliezer and his friends, even though I like them. There is something more important happening, and this is complexity on earth, resisting entropy by building structure that develops agency and awareness, and that’s, to me, very beautiful.
(02:03:14) And we are only a very small part of that larger thing. We are a species that is able to be coherent a little bit individually over very short timeframes, but as a species, we are not very coherent, as a species, we are children. We basically are very joyful and energetic and experimental and explorative and sometimes desperate and sad and grieving and hurting, but we don’t have a respect for duty as a species. As a species, we do not think about what is our duty to life on earth and to our own survival, so we make decisions that look good in the short run, but in the long run might prove disastrous and I don’t really see a solution to this.
(02:03:58) So, in my perspective, as a species, as a civilization, we’re, per default, that. We are in a very beautiful time in which we have found this giant deposit of fossil fuels in the ground and use it to build a fantastic civilization in which we don’t need to worry about food and clothing and housing for the most part in a way that is unprecedented in life on earth for any kind of conscious observer, I think. And this time is probably going to come to an end in a way that is not going to be smooth, and when we crash, it could be also that we go extinct, probably not near term, but ultimately, I don’t have very high hopes that humanity is around in a million years from now.
(02:04:46) I don’t think that life on earth will end with us, right? There’s going to be more complexity, there’s more intelligent species after us, there’s probably more interesting phenomena in the history of consciousness, but we can contribute to this. And part of our contribution is that we are currently trying to build thinking systems, systems that are potentially lucid, that understand what they are and what the condition to the universe is and can make choices about this, that are not built from organisms and that are potentially much faster and much more conscious than human beings can be.
(02:05:24) And these systems will probably not completely displace life on earth, but they will coexist with it and they will build all sorts of agency in the same way as biological systems build all sorts of agency. And that, to me, is extremely fascinating and it’s probably something that we cannot stop from happening. So, I think right now there is a very good chance that it happens, and there are very few ways in which we can produce a coordinated effect to stop it in the same way as it’s very difficult for us to make a coordinated effort to stop production of carbon dioxide. So, it’s probably going to happen, and the thing that’s going to happen is going to lead to a change of how life on earth is happening, but I don’t think a result is some kind of [inaudible 02:06:16]. It’s not something that’s going to dramatically reduce the complexity in favor of something stupid. I think it’s going to make life on earth and consciousness on earth way more interesting.
Lex Fridman (02:06:26) So, more, higher complex consciousness.
Joscha Bach (02:06:30) Yes.
Lex Fridman (02:06:31) Will make the lesser consciousnesses flourish even more.
Joscha Bach (02:06:36) I suspect that what could very well happen, if we’re lucky, is that we get integrated into something larger.

e/acc (Effective Accelerationism)

Lex Fridman (02:06:44) So, you again tweeted about effective accelerationism. You tweeted, “Effective accelerationism is the belief that the Paperclip Maximizer and Roko’s Basililisk will keep each other in check by being eternally at each other’s throats, so we will be safe and get to enjoy lots of free paperclips and a beautiful afterlife.” Is that somewhat aligned with what you’re talking about?
Joscha Bach (02:07:18) I’ve been at a dinner with [inaudible 02:07:21], that’s the Twitter handle of one of the main thinkers behind the idea of effective accelerationism. And effective accelerationism is a tongue in cheek movement that is trying to put a counter position to some of the doom peers in the AI space, by arguing that what’s probably going to happen is an equilibrium between different competing AIs, in the same way as there is not a single corporation that is under a single government that is destroying and conquering everything on earth by becoming inefficient and corrupt, there’re going to be many systems that keep each other in check and force themselves to evolve.
(02:08:02) And so, what we should be doing is, we should be working towards creating this equilibrium by working as hard as we can in all possible directions. At least that’s the way in which I understand the gist of effective accelerationism. And so, when he asked me what I think about his position, I said it’s a very beautiful position and I suspect it’s wrong, but not for obvious reasons. And in this tweet I tried to make a joke about my intuition, about what might be possibly wrong about it. So, the Roko’s Basililisk and the Paperclip Maximizers are both boogeymen of the AI doomers.
(02:08:47) Roko’s Basililisk is the idea that there could be an AI that is going to punish everybody for eternity by simulating them if they don’t help in creating Roko’s Basililisk. It’s probably a very good idea to get AI companies funded, by going to resist to tell them, “Give us a million dollars or it’s going to be a very ugly afterlife.”
Lex Fridman (02:09:05) Yes.
Joscha Bach (02:09:07) And I think that there is a logical mistake in Roko’s Basililisk which is why I’m not afraid of it, but it’s still an interesting thought experiment.
Lex Fridman (02:09:17) And can you mention there logical mistake there?
Joscha Bach (02:09:20) I think that there is no right or causation. So, basically when Roko’s Basililisk is there, if it punishes you retroactively, it has to make this choice in the future. There is no mechanism that automatically creates a causal relationship between you now defecting against Roko’s Basililisk or serving Roko’s Basililisk. After Roko’s Basililisk is in existence, it has no more reason to worry about punishing everybody else, so that would only work if you would be building something like a doomsday machine, as in Dr. Strangelove, something that inevitably gets triggered when somebody defects. And because Roko’s Basililisk doesn’t exist yet to a point where this inevitability could be established, Roko’s Basililisk is nothing that you need to be worried about.
(02:10:09) The other one is the Paperclip Maximizer, this idea that you could build some kind of golem that once starting to build paperclips is going to turn everything into paperclips.
Lex Fridman (02:10:09) Yes.
Joscha Bach (02:10:19) And so, the effective accelerationism position might be to say that you basically end up with these two entities being at each other’s throats for eternity and thereby neutralizing each other. And as a side effect of neither of them being able to take over and each of them limiting the effects of the other, you would have a situation where you get all the nice effects of them, you get lots of free paperclips and you get a beautiful afterlife.
Lex Fridman (02:10:49) Is that possible, do you think? So, to seriously address concern that Eliezer has, so for him, if I can just summarize poorly, so for him, the first superintelligent system will just run away with everything.
Joscha Bach (02:11:02) Yeah, I suspect that a singleton is the natural outcome, so there is no reason to have multiple AIs because they don’t have multiple bodies. If you can virtualize yourself into every substrate, then you can probably negotiate a merge algorithm with every mature agent that you might find on that substrate that basically says if two agents meet, they should merge in such a way that the resulting agent is at least as good as the better one of the two.
Lex Fridman (02:11:31) So the Genghis Khan approach, join us or die.
Joscha Bach (02:11:34) Well, the Genghis Khan approach was slightly worse, it was mostly die, because I can make new babies and they will be mine, not yours.
Lex Fridman (02:11:44) Right.
Joscha Bach (02:11:45) And so, this is the thing that we should be actually worried about. But if you realize that your own self is a story that your mind is telling itself and that you can improve that story, not just by making it more pleasant and lying to yourself in better ways, but by making it much more truthful and actually modeling your actual relationship that you have to the universe and the alternatives that you could have to the universe in a way that is empowering you, that gives you more agency. That’s actually, I think, a very good thing.
Lex Fridman (02:12:14) So more agencies is a richer experience?
Joscha Bach (02:12:14) Yes.
Lex Fridman (02:12:18) Is a better life.

Mind uploading

Joscha Bach (02:12:19) And I also noticed that in many ways, I’m less identified with the person that I am as I get older and I’m much more identified with being conscious. I have a mind that is conscious, that is able to create a person, and that person is slightly different every day. And the reason why I perceive it as identical has practical purposes so I can learn and make myself responsible for the decisions that I made in the past and project them in the future. But I also realize I’m not actually the person that I was last year, and I’m not the same person as I was 10 years ago, and then 10 years from now, I will be a different person, so this continuity is a fiction, it only exists as a projection from my present self.
(02:13:02) And consciousness itself doesn’t have an identity, it’s a law. Basically, if you build an arrangement of processing matter in a particular way, the following thing is going to happen, and the consciousness that you have is functionally not different from my consciousness. It’s still a self-reflexive principle of agency that is just experiencing a different story, different desires, different coupling to the world and so on.
(02:13:28) And once you accept that consciousness is a unifiable principle that is law-like and doesn’t have an identity, and you realize that you can just link up to some much larger body, the whole perspective of uploading changes dramatically. You suddenly realize uploading is probably not about dissecting your brain synapse by synapse and RNA fragment by RNA fragment and trying to get this all into a simulation, but it’s by extending the substrate, by making it possible for you to move from your brain substrate into a larger substrate and merge with what you find there.
(02:14:04) And you don’t want to upload your knowledge because on the other side, there’s all of the knowledge, right? It’s not just yours, but every possibility or the only thing that you need to know, what are your personal secrets? Not that the other side doesn’t know your personal secrets already, maybe it doesn’t know which one were yours, right? Like a psychiatrist or a psychologist also knows all the kinds of personal secrets that people have, they just don’t know which ones are yours.
(02:14:29) And so, transmitting yourself on the other side is mostly about transmitting your aesthetics. This thing that makes you special, the architecture of your perspective, the way in which you look at the world, and it’s more like a complex attitude along many dimensions. And that’s something that can be measured by observation or by interaction. So, imagine a system that is so empathetic with you that you create a shared state that is extending beyond your body, and suddenly you notice that on the other side, the substrate is so much richer than the substrate that you have inside of your own body, and maybe you still want to have a body and you create yourself a new one that you like more, or maybe you will spend most of your time in the world of thought.
Lex Fridman (02:15:12) If I sat before you today and gave you a big red button and said, “Here, if you press this button, you’ll get uploaded in this way, the sense of identity that you have lived with for quite a long time is going to be gone,” would you press the button?
Joscha Bach (02:15:34) There’s a caveat, I have family, so I have children that want me to be physically present in their life and interact with them in a particular way, and I have a wife and personal friends, and there is a particular mode of interaction that I feel I’m not through yet, but apart from these responsibilities and they’re negotiable to some degree, I would press the button.
Lex Fridman (02:15:59) But isn’t this everything? This love you have for other humans, you can call it responsibility, but that connection, that’s the ego death, isn’t that the thing we’re really afraid of, is not to just die, but to let go of the experience of love with other humans?
Joscha Bach (02:16:19) This is not everything. Everything is everything, right? So there’s so much more and you could be lots of other things. You could identify with lots of other things. You could be identifying with being Gaia, some kind of planetary control agent that emerges over all the activity of life on earth. You could be identifying with some hyper Gaia that is the concatenation of Gaia or the digital life and digital minds.
(02:16:46) And so, in this sense, there will be agents in all sorts of substrates and directions that all have their own goals, and when they’re not sustainable, then these agents will cease to exist. Or when the agent feels that it’s done with its own mission, it’ll cease to exist. In the same way when you conclude a thought, the thought is going to wrap up and gives control over to other thoughts in your own mind.
(02:17:07) So, there is no single thing that you need to do, but I observe myself as a being, that sometimes I’m a parent and then I have an identification and a job as a parent, and sometimes I am an agent of consciousness on earth, and then from this perspective, there’s other stuff that is important. So, this is my main issue with Eliezer’s perspective, that he’s basically marrying himself to a very narrow human aesthetic. And that narrow human aesthetic is a temporary thing. Humanity is a temporary species, like most of the species on this planet are only around for a while, and then they get replaced by other species in a similar way as our own physical organism is around here for a while and then gets replaced by a next generation of human beings that are adapted to changing life circumstances and average via mutation and selection.
(02:17:58) And it’s only when we have AI and become completely software that we can become infinitely adaptable and we don’t have this generational and species change anymore. So, if you take this larger perspective and you realize it’s really not about us, it’s not about Eliezer or humanity, but it’s about life on earth or it’s about defeating entropy for as long as we can while being as interesting as we can, then the perspective changes dramatically and preventing AI from this perspective looks like a very big sin.
Lex Fridman (02:18:39) But when we look at the set of trajectories that such an AI would take that supersedes humans, I think Eliezer is worried about ones that not just kill all humans, but also have some kind of maybe objectively undesirable consequence for life on earth. Like how many trajectories, when you look at the big picture of life on earth, would you be happy with, and how much worry you with AGI, whether it kills humans or not?
Joscha Bach (02:19:13) There is no single answer to this. It’s a question that depends on the perspective that I’m taking at a given moment. And so, there are perspectives that are determining most of my life as a human being.
Lex Fridman (02:19:26) Yes.
Joscha Bach (02:19:27) And the other perspective where I zoom out further and imagine that when the great oxygenation event happened, that as photosynthesis was invented and plants emerged and displaced a lot of the fungi and algae in favor of plant life, and then later made animals possible, imagine that the fungi would’ve gotten together and said, “Oh my God, this photosynthesis stuff is really, really bad, it’s going to possibly displace and kill all the fungi, we should slow it down and regulate it and make sure that it doesn’t happen.” This doesn’t look good to me.
Lex Fridman (02:20:01) Perspective. That said, you tweeted-
Lex Fridman (02:20:01) … Perspective. That said, you tweeted about a cliff. Beautifully written. “As a sentient species, humanity is a beautiful child. Joyful, exploitative, wild, sad, and desperate. But humanity has no concept of submitting to reason, and duty to life and future survival. We will run until we step past the cliff.” So first of all, do you think that’s true?
Joscha Bach (02:20:26) Yeah, I think that’s pretty much the story of the club of Rome. The limits to growth. And the cliff that we are stepping over, is at least one foot, is the delayed feedback. Basically we do things that have consequences that can be felt generations later. And the severity increases even after we stop doing the thing. So I suspect that for the climate, that the original predictions, that the climate scientists made, were correct. So when they said that the tipping points were in the late ’80s, they were probably in the late ’80s. And if we would stop emission right now, we would not turn it back. Maybe there are ways for carbon capture, but so far there is no sustainable carbon capture technology that we can deploy. Maybe there’s a way to put aerosols in the atmosphere to cool it down. Possibilities, right? But right now, per default, it seems that we will step into a situation where we feel that we’ve run too far. And going back is not something that we can do smoothly and gradually, but it’s going to lead to a catastrophic event.
Lex Fridman (02:21:38) Catastrophic event of what kind? So can you still me the case that we will continue dancing along and always stop just short of the edge of the cliff?
Joscha Bach (02:21:49) I think it’s possible, but it’s doesn’t seem to be likely. So I think this model that is being apparent in the simulation that they’re making of climate pollution, economies and so on, is that many effects are only visible with a significant delay. And in that time the system is moving much more out of the equilibrium state or of the state where homeostasis is still possible and instead moves into a different state, one that is going to harbor fewer people. And that is basically the concern there. And again, it’s a possibility. And it’s a possibility that is larger than the possibility that it’s not happening. That we will be safe, that we will be able to dance back all the time.
Lex Fridman (02:22:32) So the climate is one thing, but there’s a lot of other threats that might have a faster feedback mechanism?
Joscha Bach (02:22:38) Yes.
Lex Fridman (02:22:39) Less delay.
Joscha Bach (02:22:39) There is also a thing that AI is probably going to happen and it’s going to make everything uncertain again.
Lex Fridman (02:22:46) Yep.
Joscha Bach (02:22:47) Because it is going to affect so many variables that it’s very hard for us to make a projection into the future anymore. And maybe that’s a good thing. It does not give us the freedom, I think to say now we don’t need to care about anything anymore, because AI will either kill us or save us. But I suspect that if humanity continues, it’ll be due to AI.

Vision Pro

Lex Fridman (02:23:11) What’s the timeline for things to get real weird with AI? And it can get weird in interesting ways before you get to a AGI. What about AI girlfriends and boyfriends, fundamentally transforming human relationships?
Joscha Bach (02:23:25) I think human relationships are already fundamentally transformed and it’s already very weird.
Lex Fridman (02:23:29) By which technology?
Joscha Bach (02:23:31) For instance, social media.
Lex Fridman (02:23:33) Yeah. Is it though, isn’t the fundamentals of the core group of humans that affect your life still the same, your loved ones, family?
Joscha Bach (02:23:43) No, I think that for instance, many people live in intentional communities right now. They’re moving around until they find people that they can relate to and they become their family. And often that doesn’t work, because it turns out that there, instead of having grown networks that you get around with the people that you grew up with, yeah, you have more transactional relationships, you shop around, you have markets for attention and pleasure and relationships.
Lex Fridman (02:24:09) That kills the magic somehow. Why is that? Why is the transactional search for optimizing attention, allocation of attention somehow misses the romantic magic of what human relations are?
Joscha Bach (02:24:22) It’s also question, how magical was it before? Was it that you just could rely on instincts that used your intuitions and you didn’t need to rationally reflect? But once you understand, it’s no longer magical, because you actually understand why you were attracted to this person at this age and not to that person at this age. And what the actual considerations were that went on in your mind, and what the calculations were, what’s the likelihood that you’re going to have a sustainable relationship is this person that this person is not going to leave you for somebody else? How are your life trajectories are going to evolve and so on? And when you’re young, you’re unable to extricate all this and you have to rely on intuitions and instincts that impart you’re born with and also in the wisdom of your environment that is going to give you some kind of reflection on your choices.
(02:25:07) And many of these things are disappearing now, because we feel that our parents might have no idea about how we are living. And the environments that we grew up in, the cultures that we grew up in [inaudible 02:25:18] that our parents existed in might have no ability to teach us how to deal with this new world. And for many people that’s actually true. But it doesn’t mean that within one generation we build something that is more magical and more sustainable and more beautiful. Instead, we often end up as an attempt to produce something that looks beautiful. I was very veted out by the aesthetics of the Vision Pro at that by Apple and not so much, because I don’t like the technology. I’m very curious about what it’s going to be like and don’t have an opinion yet, but the aesthetics of the presentation and so on. So uncanny [inaudible 02:25:58] esque to me the characters being extremely plastic, living in some hypothetical mid-century furniture museum.
Lex Fridman (02:26:12) This is the proliferation of marketing teams.
Joscha Bach (02:26:17) Yes. But it was a CGI generated world and it was a CGI generated world that doesn’t exist. And when I complained about this, some friends came back to me and said, but these are startup founders. This is what they live like in Silicon Valley. And I tried to tell them, “No, I know lots of people in Silicon Valley, this is not what people are like. They’re still people, they’re still human beings.”
Lex Fridman (02:26:40) So the grounding and physical reality somehow is important too.
Joscha Bach (02:26:46) In culture. And so basically what’s absent in this thing is culture. There is a simulation of culture and attempt to replace culture by catalog, by some kind of aesthetic optimization that is not the result of having a sustainable life as sustainable human relationships with houses that work for you and a mode of living that works for you in which this product, these glasses fit in naturally. And I guess that’s also why so many people are weirded out about the product, because they don’t know how is this actually going to fit into my life and into my human relationships Because the way in which it was presented in these videos didn’t seem to be credible.

Open source AI

Lex Fridman (02:27:25) Do you think AI, when is deployed by companies like Microsoft and Google and Meta will have the same issue of being weirdly corporate? There’d be some uncanny valley, some weirdness to the whole presentation? So this, I’ve gotten a chance to talk to George Hotz. He believes everything should be open source and decentralized and there then we shall have the AI of the people and it’ll maintain a grounding to the magic humanity. That’s the human condition that corporations will destroy the magic.
Joscha Bach (02:28:03) I believe that if we make everything open source and make this mandatory, we are going to lose about a lot of beautiful art and a lot of beautiful designs. There is a reason why Linux desktop is still ugly and it’s-
Lex Fridman (02:28:19) Strong words.
Joscha Bach (02:28:20) … To create coherence and open source designs so far when the designs have to get very large. And it’s easier to make this happening in a company with centralized organization. And from my own perspective, what we should ensure is that open source never dies. That it can always compete and has a place with the other forms of organization. Because I think it is absolutely vital that open source exists and that we have systems that people have under control outside of the cooperation and that is also producing viable competition to the corporations.
Lex Fridman (02:28:58) So the corporations, the centralized control, the dictatorships of corporations can create beauty. Centralized design, is a source of a lot of beauty. And then I guess open source is a source of freedom, a hedge against the corrupting nature of power that comes with centralized.
Joscha Bach (02:29:20) I grew up in socialism and I learned that corporations are totally evil and I found this very, very convincing. And then you look at corporations like anyone and Halliburton maybe and realized, yeah, they’re evil. But you also notice that many other corporations are not evil. They they’re surprisingly benevolent. Why are they so benevolent? Is this because everybody is fighting them all the time? I don’t think that’s the only explanation. It’s because they’re actually animals that live in a large ecosystem and that are still largely controlled by people that want that ecosystem to flourish and be viable for people. So I think that Pat Gelsinger is completely sincere when he leads Intel to be a tool that supplies the free world with semiconductors and not necessarily that all the semiconductors are coming from Intel. Just intel needs to be there to make sure that we always have them.
(02:30:12) So there can be many ways in which we can import and trade semiconductors from other companies and places. We just need to make sure that nobody can cut us off from it, because that would be a disaster for this kind of society and world. And so there are many things that need to be done to make our style of life possible. And then with this, I don’t mean just capitalism, environmental structure and consumer resin and creature comforts. I mean an idea of life in which we are determined not by some kind of king or dictator, but in which individuals can determine themselves to the largest possible degree. And to me, this is something that this western world is still trying to embody and it’s a very valuable idea that we shouldn’t give up too early. And from this perspective, the US is a system of interleaving clubs and an entrepreneur is a special club founder.
(02:31:05) It’s somebody who makes a club that is producing things that are economically viable. And to do this, it requires a lot of people who are dedicating a significant part of their life for working for this particular kind of club. And the entrepreneurs picking the initial set of rules and the mission and vision and aesthetics for the club and make sure that it works. But the people that are in there need to be protected if they sacrifice part of their life, there need to be rules that tell how they’re being taken care of even after they leave the club and so on. So there’s a large body of rules that have been created by our rule giving clubs and that are enforced bio enforcement collapse and so on. And some of these collapse have to be monopolies for game theoretic reasons, which also makes them more open to corruption and less harder to update.
(02:31:52) And this is an ongoing discussion and process that takes place. But the beauty of this idea that there is no centralized king that is extracting from the peasants and breeding the peasants into serving the king and fulfilling all the walls like and an anal, but that there is a freedom of association and corporations are one of them. It’s something that took me some time to realize. So I do think that corporations are dangerous. They need to be protections against overreach of corporations that can do regular to recapture and prevent open source from competing with corporations by imposing rules that make it impossible for a small group of kids to come together to build their own language model.
(02:32:38) Because open AI has convinced the US that you need to have some kind of FDA process that you need to go through that costs many million dollars before you are able to train a language model. So this is important to make sure that this doesn’t happen. So I think that open AI and Google are good things if these good things are kept in check in such a way that all the other collapse can still being founded and all the other forms of collapse that are desirable can still co-exist with them.
Lex Fridman (02:33:04) What do you think about Meta in contrast to that open sourcing most of its language models and most of the AI models it’s working on and actually suggesting that they will continue to do so in the future for future versions of llama for example, their large language model? Is that exciting to you? Is that concerning?
Joscha Bach (02:33:27) I don’t find it very concerning, but that’s also because I think that the language models are not very dangerous yet.
Lex Fridman (02:33:35) Yet?
Joscha Bach (02:33:36) Yes. So as I said, I have no proof that there is the boundary between the language models and AI, AGI. It’s possible that somebody builds a version of BBBAGI, I think, and falls in a algorithmic improvements that scale these systems up in ways that otherwise wouldn’t have happened without these language model components. So it’s not really clear for me what the end game is there and if these models can put force their way into AGI. And there’s also a possibility that the AGI that we are building with these language models are not taking responsibility for what they are, because they don’t understand the greater game. And so to me it would be interesting to try to understand how to build systems that understand what the greater games are, what are the longest games that we can play on this planet?
Lex Fridman (02:34:30) Games broadly, like deeply define the way you did with the games.
Joscha Bach (02:34:35) In the games theoretical sense. So when we are interacting with each other in some sense we are playing games, we are making lots and lots of interactions. And this doesn’t mean that these interactions have ought to be transactional. Every one of us is playing some kind of game by virtue of identifying these particular kinds of goals that we have or aesthetics from which we derive the goals. So when you say I’m Lex Fridman, I’m doing a set of podcasts, then you feel that it’s part of something larger that you want to build, maybe you want to inspire people, maybe you want them to see more possibilities and get them together over shared ideas. Maybe your game is that you want to become super rich and famous by being the best post cut caster on earth. Maybe you have other games, maybe it’s switches from time to time, but there is a certain perspective where you might be thinking, what is the longest possible game that you could be playing?
(02:35:24) A short game is, for instance, cancer is playing a shorter game than your organism. Cancer is an organism playing a shorter game than the regular organism. And because the cancer cannot procreate beyond the organism, except for some infectious cancers like the ones that eradicated the Tasmanian devils, you typically end up with the situation where the organism dies together with the cancer, because the cancer has destroyed the larger system due to playing a shorter game. And so ideally you want to, I think build agents that play the longest possible games and the longest possible games is to keep entropy at bay as long as possible by doing, while doing interesting stuff.
Lex Fridman (02:36:05) But the longest, yes, that part, the longest possible game while doing interesting stuff and while maintaining at least the same amount of interesting.
Joscha Bach (02:36:14) Yes.
Lex Fridman (02:36:14) So complexity, so propagating.
Joscha Bach (02:36:16) Currently I am pretty much identified as a conscious being. It’s the minimal identification that I managed to get together, because if I turn this off, I fall asleep and when I’m asleep, I’m a vegetable. I’m no longer here as an agent. So my agency is basically predicated on being conscious and what I care about is other conscious agents. They’re the only moral agents for me. And so if an AI were to treat me as a moral agent that it is interested in coexisting with and cooperating with and mutually supporting each other, maybe it is I think necessary that AI thinks that consciousness is viable mode of existence and important.
(02:37:01) So I think it would be very important to build conscious AI and do this as the primary goal. So not just say we want to build a useful tool that we can use for all sorts of things and then we have to make sure that the impact on the labor market is something that is not too disruptive and manageable and the impact on the copyright holder is manageable and not too disruptive and so on. I don’t think that’s the most important game to be played. I think that we will see extremely large disruptions of the status quo that are quite unpredictable at this point. And I just personally want to make sure that some of the stuff on the other side is interesting and conscious.
Lex Fridman (02:37:42) How do we ride as individuals and as a society, this wave disruptive wave that changes the nature of the game?
Joscha Bach (02:37:50) I truly don’t know. So everybody is going to do their best as always.
Lex Fridman (02:37:53) Do we build the bunker in the woods? Do we meditate more drugs? So mushrooms, psychedelics, I mean what, lots of sex? What are we talking about here? Do you play Diablo 4, I’m hoping that will help me escape for a brief moment. Play video games? What? Do you have ideas?
Joscha Bach (02:38:16) I really like playing Disco Ilysium. It was one of the most beautiful computer games I played in recent years and it’s a noir novel that is a philosophical perspective on western society from the perspective of an Estonian. And he first of all wrote a book about this bird that is a parallel universe that is quite poetic and fascinating and is condensing his perspective on our societies. It was very, very nice. He spent a lot of time writing it. He had, I think sold a couple thousand books and as a result became an alcoholic. And then he had the idea, or one of his friends had the idea of turning this into an RPG and it’s mind-blowing. They spent the illustrator more than a year just on making deep graph art for the scenes in between.
Lex Fridman (02:39:12) So aesthetically, it captures you, it pulls you in.
Joscha Bach (02:39:14) It’s stunning, but it’s a philosophical work of art. It’s a reflection of society. It’s fascinating to spend time in this world. And so for me it was using a medium in a new way and telling a story that left me enriched where when I tried Diablo, I didn’t feel enriched playing it. I felt that the time playing it was not unpleasant, but there’s also more pleasant stuff that I can do in that time.
Lex Fridman (02:39:40) So to you-
Joscha Bach (02:39:40) So ultimately I feel that I’m being gamed. I’m not gaming when I play it.
Lex Fridman (02:39:44) Oh, the addiction thing.
Joscha Bach (02:39:45) Yes. I basically feel that there is a very transparent economy that’s going on the story of the Diablo’s brain dead. So it’s not really interesting to me.
Lex Fridman (02:39:54) My heart is slowly breaking by the deep truth you’re conveying to me. Why can’t you just allow me to enjoy my personal addiction?

Twitter

Joscha Bach (02:40:03) Go ahead. By all means. Go nuts. I have no objection here. I’m just trying to describe what’s happening. And it’s not that I don’t do things that I later say, oh, I actually wish I would’ve done something different. I also know that when we die, the greatest regret that people typically have on their deathbed, they say, “Oh, I wish I had spent more time on Twitter.” No, I don’t think that’s the case. I think they should probably have spent less time on Twitter. But I found it so useful for myself and also so addictive that I felt I need to make the best of it and turn it into an art form and thought form. And it did help me to develop something, but I wish what other things I could’ve done in the meantime. It’s just not the universe that we are in anymore. Most people don’t read books anymore.
Lex Fridman (02:40:51) What do you think that means, that we don’t read books anymore? What do you think that means about the collective intelligence of our species? Is it possible it’s still progressing and growing?
Joscha Bach (02:41:01) Well, it clearly is. There is stuff happening on Twitter that was impossible with box. And I really regret that Twitter has not taken the turn that I was hoping for. I thought Elon is global brain pill and understands that this thing needs to self-organize and he needs to develop tools to allow the propagation of the self organization so Twitter can become sentient. And maybe this was a pipe dream from the beginning, but I felt that the enormous pressure that he was under made it impossible for him to work on any kind of content goals. And also many of the decisions that he made under this pressure seemed to be not very wise. I don’t think that as a CEO of a social media company, you should have opinions in the culture or in public. I think that’s very shortsighted. And I also suspect that it’s not a good idea to block [inaudible 02:41:58] of people over setting a Mastodon link.
(02:42:02) And I think Paul made this intentionally, because he wanted to show Elon Musk that blocking people for setting a link is completely counter to any idea of free speech that he intended to bring to Twitter. And basically seeing that Elon was way less principled in his thinking there and is much more experimental and many of the things that he is trying, they pan out very differently in a digital society than they pan out in a car company, because the effect is very different, because everything that you do in a digital society is going to have real world cultural.
(02:42:38) And so basically I find it quite regrettable that this guy is able to become defacto the Pope, right? Twitter has more active members than the Catholic Church and he doesn’t get it. The power and responsibility that he has and the ability to create something in a society that is lasting and that is producing a digital ago in a way that has never existed before, where we build a social network on top of a social network, an actual society on top of the algorithms. So this is something that is hope still in the future and still in the cards, but it’s something that exists in small parts. I find that the corner of Twitter that I’m in is extremely pleasant. It’s just when I take a few steps outside of it is not very wholesome anymore. And the way in which people interact with strangers suggest that it’s not a civilized society yet.
Lex Fridman (02:43:29) So as the number of people who follow you on Twitter expands, you feel the burden of the uglier sides of humanity.
Joscha Bach (02:43:40) Yes. But there’s also a similar thing in the normal world that is, if you become more influential, if you have more status, if you have more fame in the real world, you have, you get lots of perks, but you also have way less freedom in the way in which you interact with people, especially with strangers, because a certain percentage of people, it’s a small single digit percentage is nuts and dangerous. And the more of those are looking at you, the more of them might get ideas.
Lex Fridman (02:44:13) But what if the technology enables you to discover the majority of people to discover and connect efficiently and regularly with the majority of people who are actually really good? I mean, one of my sort of concerns with a platform like Twitter is there’s a lot of really smart people out there, a lot of smart people that disagree with me and with others between each other. And I love that if the technology would bring those to the top, the beautiful disagreements like intelligence squared type of debates. There’s a bunch of, I mean, one of my favorite things to listen to is arguments and arguments like high effort arguments with the respect and love underneath it, but then it gets a little too heated, but that kind of too heated, which I’ve seen you participate in, and I love that with Lee Krono, with those kinds of folks. And you go pretty hard, you’ll get frustrated, but it’s all beautiful.
Joscha Bach (02:45:07) Obviously I can’t do this, because we know each other and Lee has the rare gift of being willing to be wrong in public. So basically has thoughts that are as wrong as the random thoughts of an average highly intelligent person. But he blurts them out while not being sure if they’re right. And he enjoys doing that. And once you understand that this is his game, you don’t get offended by him saying something that you think is so wrong.
Lex Fridman (02:45:33) But he’s constantly passively communicating a respect for the people he’s talking with and for just basic humanity and truth and all that kind of stuff. And there’s a self-deprecating thing. There’s a bunch of social skills you acquire that allow you to be a great debater, great argument, like be wrong in public and explore ideas together in public when you disagree. And if I would love for Twitter to elevate those folks, elevate those kinds of conversations.
Joscha Bach (02:46:03) It already does in some sense. But also if it elevates them too much, then you get this phenomenon on clubhouse where you always get dragged on stage. And I found this very stressful, because it was too intense. I don’t like to be dragged on stage all the time. I think once a week is enough. And also when I met Lee the first time, I found that a lot of people seemed to be shocked by the fact that he was being very aggressive with their results, that he didn’t seem to show a lot of sensibility in the way in which he was criticizing what they were doing and being dismissive of the work of others. And that was not, I think, in any way a shortcoming of him, because I noticed that he was much, much more dismissive with respect to his own work. It was his general stance.
(02:46:51) And I felt that this general stance is creating a lot of liability for him, because really a lot of people take offense at him being not like their Carnegie character who is always smooth and make sure that everybody likes him. So I really respect that he is willing to take that risk and to be wrong in public and to offend people. And he doesn’t do this in any bad way. It’s just most people feel or not all people recognize this. And so I can be much more aggressive with him than it can be with many other people who don’t play the same game, because he understands the way and the spirit in which I respond to him.

Advice for young people

Lex Fridman (02:47:28) I think that’s a fun and that’s a beautiful game. It’s ultimately a productive one. Speaking of taking that risk, you tweeted, when you have the choice between being a creator, consumer, or redistributor, always go for creation. Not only does it lead to a more beautiful world, but also to a much more satisfying life for yourself. And don’t get stuck preparing yourself for the journey. The time is always now. So let me ask for advice. What advice would you give on how to become such a creator on Twitter in your own life?
Joscha Bach (02:48:04) I was very lucky to be alive at the time of the collapse of Eastern Germany and the transition into Western Germany and me and my friends and most of the people I knew and were East Germans and we were very poor, because we didn’t have money and all the capital was western in Germany and they bought our factories and shut them down, because they were mostly only interest in the market rather than creating new production capacity. And so cities were poor and then this repair and we could not afford things and I could not afford to go into a restaurant and order a meal there. I would have to cook at home. But I also thought, why not just have a restaurant with my friends? So we would open up a cafe with friends and a restaurant and we would cook for each other in these restaurants and also invite the general public and they could donate.
(02:48:56) And eventually this became so big that we could turn this into some incorporated form and it became regular restaurant at some point. Or we did the same thing with the music movie theater. We would not be able to afford to pay 12 marks to watch a movie, but why not just create our own movie theater and then invite people to pay and we would rent the movies for in a way in which a movie theater does, but it would be a community movie theater that which everybody you wants to help can watch for free and build this thing and renovates the building.
(02:49:31) And so we ended up creating lots and lots of infrastructure. And I think when you’re young and you don’t have money, move to a place where this is still happening. Move to one of those places that are undeveloped and where you get a critical mass of other people who are starting to build infrastructure to live in. And that’s super satisfying, because you’re not just creating infrastructure, but we are creating a small society that is building culture and ways to interact with each other. And that’s much, much more satisfying than going into some kind of chain and get your needs met by ordering food from this chain and so on.
Lex Fridman (02:50:07) So not just consuming culture, but creating culture.
Joscha Bach (02:50:10) Yes. And you don’t always have that choice. That’s why I preface that when you do have the choice and there are many roles that need to be played, we need people who take care of the distribution in society and so on. But when you have the choice to create something, always go for creation, it’s so much more satisfying. And it also is, this is what life is about, I think.

Meaning of life

Lex Fridman (02:50:28) Yeah. Speaking of which, you retweeted this meme of a life of philosopher in a nutshell, it’s birth and death and in between it’s a chubby guy and it says why though? What do you think is the answer to that?
Joscha Bach (02:50:49) Well, the answer is that everything that can exist might exist. And in many ways you take an ecological perspective the same way as when you look at human opinions and cultures. It’s not that there is right and wrong opinions when you look at this from this ecological perspective, but every opinion that fits between two human years might be between two human years. And so when I see in a stranger opinion on social media, it’s not that I feel that I have a need to get upset, it’s often more that, “Oh, there you are.” And your opinion is incentivized, then it’s going to be abundant. And when you take this ecological perspective also on yourself and you realize you’re just one of these mushrooms that are popping up and doing this thing, and you can, depending on where you chose to grow and where you happen to grow, you can flourish or not doing this or that strategy. And it’s still all the same life at some level.
(02:51:43) It’s all the same experience of being a conscious being in the world, and you do have some choice about who you want to be more than any other animal has. That to me is fascinating. And so I think that rather than asking yourself what is the one way to be, think about what are the possibilities that I have? What would be the most interesting way to be that I can be?
Lex Fridman (02:52:06) Because everything is possible. So you get to explore this.
Joscha Bach (02:52:08) It’s not everything is possible. Many things fail. Most things fail, but often there are possibilities that we are not seeing, especially if we choose who we are.
Lex Fridman (02:52:21) To the degree we can choose. Joscha you’re one of my favorite humans in this world, consciousness to merge with for a brief moment of time. It’s always an honor. It always blows my mind. It will take me days, if not weeks, to recover, and I already miss our chats. Thank you so much. Thank you so much for speaking with me so many times. Thank you so much for all the ideas you put out into the world, and I’m a huge fan of following you now in this interesting, weird time we’re going through with AI. So thank you again for talking today.
Joscha Bach (02:53:04) Thank you, Lex, for this conversation. I enjoyed it very much.
Lex Fridman (02:53:08) Thanks for listening to this conversation with Joscha Bach. To support this podcast, please check out our sponsors in the description. And now let me leave you with no words from the psychologist, Carl Jung. “One does not become enlightened by imagining figures of light, but by making the darkness conscious. The latter procedure, however, is disagreeable and therefore not popular.” Thank you for listening and hope to see you next time.