David Eagleman | How dating an AI could improve your real love life

About the episode

Having an AI boyfriend or girlfriend might seem creepy, but what if it helped you get better at human relationships? 

In this episode, we talk with David Eagleman, a professor of neuroscience at Stanford, bestselling author, and science communicator. We discuss how AI and other technologies can help us become better humans – wiser, kinder, and more empathetic, not just more productive. We get a neuroscientist's take on how human and artificial intelligence interact, including:

  • How to use AI to better understand other people and improve our relationships.
  • Using debate AIs in schools to make younger generations better at critical thinking and grasping both sides of an argument.
  • Whether AI is making our lives too easy by removing the friction we need to learn.
  • Technologies that could expand what’s possible with our brain, from mind uploading to brain-to-brain communication.

Transcript

[00:00:35] Beatrice: I'm very happy to be here today with David Eagleman. You're a professor of neuroscience at Stanford University. You're also a writer and a podcaster; you've done a lot of things. The main reason I really wanted to have you on the podcast is because you gave a talk at Foresight Institute's Vision Weekend, where you talked about how AI could help us become better people. I thought that was a really nice angle that I wanted to explore a bit more: how could AI actually help us become better people and maybe even improve our relationships? Because I think we hear a lot about how it could make things worse in many ways. So, I want to explore: can it also make things better, actually?

[00:01:19] David: Yeah.

[00:01:21] Beatrice: So maybe we start a little bit just on your story. How did you get here? You're a professor of neuroscience; what was your trajectory to work on this?

[00:01:33] David: I've always been interested in who we are. I was taking a lot of philosophy in college and then I discovered neuroscience at some point and realized, "Oh, that's actually a much better way to answer the question of what's going on inside," and to shed light on various issues about the way that we perceive reality. But one of the main through lines through all my research is how each of us perceives reality differently. I think that point will come up in our conversation today; when we talk about making ourselves better humans and wiser and more empathic, it's about understanding that we each have a very limited internal model. What's going on in my inner cosmos is different than yours, than his, and hers, and so on. So the question is: how can we get a little bit better at understanding what it is to be in someone else's internal world?

[00:02:32] Beatrice: Inner Cosmos is also the name of your podcast.

[00:02:34] David: That's right.

[00:02:35] Beatrice: I highly recommend it. Since you've done so many things, how do you connect the dots? Like the writing, the professorship, and all of these things?

[00:02:48] David: Yeah. So, I've written a lot of nonfiction books about the brain, and that connects pretty well with my research and what I'm doing. But I've also written fiction, and I've always felt that science and literature are very close cousins because in science we're trying to figure things out by essentially leaping to new islands and trying to figure out if we can build bridges there. This is what you do every day: you hypothesize something crazy and then you figure out, "Wait, could that be true?" And in literature, it is the same thing. You jump to some island that you've never been on and you think, "Okay, what is that? Tell me, how does that open my eyes to a new perspective?" And so, for example, I've got a book of short stories called Sum, which is 40 mutually exclusive short stories, all of which are just shining a flashlight around the possibility space.

[00:03:43] Beatrice: Yeah. So if we actually dive into this topic that I wanted to talk to you about—how AI, and maybe neurotech as well, could help us become better people—maybe an interesting place to start is just how much of us is malleable, in terms of the brain and who we are? How much can we improve?

[00:04:07] David: It turns out we're super malleable. This is what's called brain plasticity. Your brain, every second of your life, is changing. You've got 86 billion neurons, these cells in the brain, and essentially these are each like little creatures that are moving around your whole life in there, changing their connectivity patterns and strengthening and weakening certain connections and so on. This is why we have learning and memory. But what it means is that we're actually different year to year and, for that matter, month to month. There's this interesting thing where we all are able to acknowledge, "Wow, I guess I've changed a lot from five years ago or 10 years ago," but we have this illusion—the "End of History Illusion," it's called—where we think, "Okay, but now I am who I am and I'm not going to really change in the future." But of course, when you look back five or 10 years from now, you'll see that you've changed again. Anyway, the point is we're extremely malleable, and a big part of that has to do not just with the other people we interact with, but also the technologies we interact with. This is why I am really interested in how new tech, like AI and for that matter brain-computer interfaces and so on, is going to change who we are.

[00:05:26] Beatrice: And so if we dive into the golden question: are there examples that you've seen already of AI or brain-computer interfaces, maybe, where you have seen it help us become better in any ways? And also, if you project out, what are potential ideas that we're not there yet, but we could get there?

[00:05:46] David: Yeah. I feel like we're at the foot of the mountain and the stuff we're going to be seeing over the next few years is going to be extraordinary. But I'll mention something that I talked about at the Foresight Institute recently. I think this is a good example: recently, there were these debate bots that were surreptitiously released on Reddit on a channel where people debate and try to change each other's minds. Nobody knew these were AI bots; they pretended they were human and they debated people on topics, and they ended up doing about six times better than the average human in terms of changing the other person's mind. Now, there was a big outcry about this; everyone thought, "This is terrible." But I had a slightly different interpretation, because what I think this might illustrate for us are things we can aspire to. These debate bots won not because they were lying or they were manipulative, but because they just did a really good job. They were calm and empathetic and they presented their logic well, and people on the other end said, "You know what? I'm convinced. That changed my mind."

And so what struck me is: wow, this could really be a nice example of how we can become better debaters. The thing is, when you look at the last 30 years of evidence on this, what you really see is that AI has been teaching us this stuff all along. Even though we think of AI as new, there have, of course, been earlier versions. Like in 1997, when IBM's Deep Blue beat Garry Kasparov at chess. That was such a moment. I remember it because we all realized then that humans were never going to beat computers at chess again. That was it. The computers had bested us, and it seemed like, "Wow, what does this mean for the future of chess?" The good news is that more people are playing chess than ever, and a good chess player now is better than the grandmasters of 25 years ago. The reason is that everybody trains with AI now, and as a result, everyone's playing a better game.

The exact same thing happened about 20 years later with the game of Go, when Google DeepMind's AlphaGo beat the world champion, Lee Sedol. He was devastated that he'd just been beaten by a computer, but he went on to play a bunch of human opponents next and beat them all, because he was starting to use moves that had never been seen before. Go is a game that's been played for 2,000 years, and no one had ever thought of these particular moves, which were perfectly within the rules. So it has massively changed how well the game of Go can be played. The same thing has been seen in poker and in various video games: every time someone presents an AI that plays these games, the human players think, "Oh my gosh, I never thought of doing that," and everyone gets better. Anyway, I think this can happen across all human domains, where we can see, "Oh, this is how we can really up our game here," instead of just being caught in the patterns other humans have established. So this is why I'm really hopeful that AI is going to improve us.

[00:09:21] Beatrice: Yeah. And why do you think... this chess example seems like a great example of where this has gone right, and it's not really tech hindering us but uplifting us in some way. So, what do you think we need to get right in order to get more of this? Do we need to be better at the "moral imaginations" of what we can do with tech? Because it feels like it's not necessarily the technology that's the bottleneck here; it's more like how we can better implement it in our lives.

[00:09:57] David: Yes, I think that's very good. Okay, here's one place that I've been very interested in: AI relationships. It turns out that millions of people now have AI relationships. About three-quarters of those are in Asia and a quarter of them are in America and Europe, where someone has an AI girlfriend or boyfriend. So that's a trend that's only going to go up from here. Now, a lot of people are worried about that in terms of, "Will people actually stop dating real humans as a result?" I'll tell you why I'm optimistic about it.

It's because, first of all, we have millions of years of evolution driving us towards wanting to go out with a human and be with a human physically, emotionally—everything. And also, to introduce our girlfriend or boyfriend to our friends, take them out to dinner, introduce them to our parents, and all that. We've got a million drives that make it so that we'll keep dating real humans. The reason I'm really optimistic about it is because, if done correctly, I think that AI bots could make people better at relationships. There are a thousand ways that we all do stupid things with relationships. We take many years of going through stupid things and fights and whatever, and losing relationships and so on before we ever get good at it—and hopefully we get good at it at some point.

But imagine you could have an AI girlfriend or boyfriend who gives you good feedback, who says, "Hey, you know what? It hurts my feelings when you say that," or, "You probably didn't realize this, but it makes me feel a certain way when you say that, when you do that kind of thing." That would be extraordinarily valuable, because it's what we call a sandbox in video games. In lots of video games, you've got this opportunity to try things out before you actually go up against the big boss. Having an AI relationship could be like the sandbox where we get to try things out. "Hey, I'm really mad about this. What if I said it this way?" "Whoa, that's not going to work. That's only going to cause a bigger fight." "Okay, yeah, what if I do..." and you work the stuff out that way. Why is this valuable? It's because everybody's busy; everyone's got their own life going on. But AI can pay attention just to you, and your AI boyfriend or girlfriend can really just care about you and say, "Look, Beatrice, here's how I would think about it. Try it this way." They have infinite patience and, for that matter, knowledge of you.

So I just think I'm really hopeful about this. Now, in answer to your question, I don't think any companies are doing this yet because we're in very early days. So right now AI is very sycophantic. We all know this. AI says, "Oh, what a great idea! Whoa, that was the perfect thing you said, Beatrice." But I think it's just a matter of time before there's a market for this where people realize, "Hey, you know what? I'm going to pay for this tough-love AI that's going to make me better at my relationships with a real human who I can touch and be with and have children with and all that stuff that we care about." So maybe I'll start there. Maybe I'll call it "Tough Love AI." But anyway, that's the idea.

[00:13:19] Beatrice: Yeah.

[00:13:20] David: That's the idea there.
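To make the "Tough Love AI" idea concrete, here's a minimal sketch of how such a coach might be configured with an off-the-shelf LLM API. The system prompt, the model name, and the whole framing are illustrative assumptions; this is not an existing product or David's design.

```python
# A minimal sketch of the "Tough Love AI" idea: a relationship coach
# instructed to give candid feedback rather than flattery.
# The system prompt and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOUGH_LOVE_PROMPT = """\
You are a relationship coach. Do not flatter the user or take their side
by default. When they describe a conflict, point out what they may have
contributed to it, explain how their words likely landed for the other
person, and suggest a kinder rephrasing. Be warm but candid."""

def coach(message: str) -> str:
    """Return candid feedback on something the user plans to say or do."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {"role": "system", "content": TOUGH_LOVE_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(coach("I'm furious. I want to tell my partner they ALWAYS ignore me."))
```

The only real design choice here is the system prompt: it swaps the default agreeable persona for the sandbox David describes, where you can rehearse a hard conversation and get pushback before having it for real.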

[00:13:21] Beatrice: I love the "tough love AI partner" idea. It also seems like a personal coach—a relationship coach or something like that—would be great. So you don't necessarily have to be in a relationship with an AI for it to be useful to you. One thing that I found really interesting when I did my research for this episode was that you recently used an AI tool to recreate the voice of your father, who has passed away.

[00:13:52] David: That's right.

[00:13:53] Beatrice: Could you just describe a bit of that experience? Because I think people may have a lot of thoughts like, "Oh, that's not a good thing to do," or they would have this sort of counter-reaction to it. But what was your experience actually doing it?

[00:14:10] David: Yeah. Okay, here's the thing. When somebody passes away, like my father did six years ago, it hurts. What I hypothesized in my last book, Livewired, is that the reason it hurts so much is because your brain expects the presence of that person in the world. An exact analogy is drugs. When we do drugs, our brain chemistry adapts and changes to expect the presence of that drug. That's why we have withdrawal symptoms if we stop the drug; your brain says, "Hey, I was expecting that thing." It's analogous to what happens when somebody that you love dies suddenly. They're not there. You can't talk to them. There's no response from them when you say something. It's just... and it hurts. It hurts badly. Okay?

So what we do, in many ways, is come up with ways of replacing that. Photographs were invented 150 years ago—daguerreotypes originally—and that's one of the first things people do: make sure they have pictures of their loved one so they can look, remember, see that sort of thing. Now we've got lots of photos; all of that's great. But what I did is use a program called ElevenLabs: I extracted some videos of my father and recreated his voice, and the key is that you can then have him say new things: "Hey, happy New Year 2026! I hope you're well. How are the kids?" You can do all this stuff. And I can hear my father's voice, which—as with all our parents' voices—is so deeply embedded in us, all the way down to the bottom of our nervous system. It's among the first voices we ever heard. So I thought it was really lovely to be able to hear my dad's voice again, saying new things, saying things he couldn't have known when he passed away in 2020.

It's interesting because people do, in fact, have different reactions to this. Even my mother and my brother had different reactions than I did, and I'm not even sure they would tell me the truth about exactly what they felt deep down about it. But I love it. I think it's a really wonderful way to add one more piece of somebody who has departed.
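For readers curious about the mechanics, here's a rough sketch of the kind of workflow David describes, using the ElevenLabs REST API: clone a voice from a few audio samples, then have it say something new. The endpoint paths, field names, and file names below reflect the public API as generally documented, but treat the details as assumptions to verify against the current docs.

```python
# Sketch: recreate a voice from old recordings with ElevenLabs, then
# generate new speech in that voice. Endpoints and fields are assumptions
# based on the public API; verify against the current documentation.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# 1) Create a cloned voice from audio samples extracted from old videos.
with open("dad_sample_1.mp3", "rb") as f1, open("dad_sample_2.mp3", "rb") as f2:
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "Dad"},
        files=[("files", f1), ("files", f2)],
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# 2) Have the cloned voice say something new.
resp = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={"text": "Happy New Year 2026! How are the kids?"},
)
resp.raise_for_status()
with open("new_message.mp3", "wb") as out:
    out.write(resp.content)  # the response body is the audio itself
```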

[00:16:33] Beatrice: It seems like there's probably some sort of line where it's not constructive anymore, but if you just do it in this sort of supportive way where you're not addicted to sitting at home listening to your dad's voice all day...

[00:16:48] David: Let's examine where that line would be, because what I would... this can't happen now, but maybe by the time I pass away, my children can run a pretty complete simulation of me.

[00:17:01] Beatrice: Yeah.

[00:17:02] David: And then it's like I haven't gone. Anytime, when they're 90 years old, they can say, "Hey Dad, I've got a question. What do you think about this?" and, "Hey, here's what I think about that." I don't see why not. I also don't think of that as passing on; I think what that is, is just transferring things to a new... now I live in the computer, in a sense.

[00:17:21] Beatrice: Yeah.

[00:17:22] David: Yeah.

[00:17:23] Beatrice: That's also one of the things that we've talked about a bunch at Foresight—this potential of being able to upload and create digital minds and these sorts of things. As a professor of neuroscience, where do you stand on this? Is it plausible? Is this a real possibility?

[00:17:46] David: I think it is. We don't know for sure until we build it, but one of the things that Large Language Models have opened up for all of us is... I think I can speak for essentially everybody in my field: we didn't think it was going to be possible. When I thought about AI, call it six years ago, I was still thinking about it the way all of us in the brain field were: "Okay, look, the brain's doing this really special stuff. It's been evolving for hundreds of millions of years. It does all these things—the structure of language, what appears to be deep thinking—and there's no way a little computer program is going to capture that." And then Transformer models came out, and it really surprised us all. It's really turned me around in the way I think about all of this.

So just imagine the next level of this. Here's the unknown part that I was referring to: let's imagine that we made a total scan of your brain—that we had some non-destructive way, or maybe even after you die, a destructive way, of scanning every connection, every detail of what's going on in your brain—and then we reproduce that in a computer. The question is: is it you? If I sit down and say, "Hey Beatrice, how you feeling?" and you say, "Oh, I'm good. Yeah, I could do some coffee. It gets a little cold in here," and if you're talking to me and it seems like it's you... that is the thing we don't know for sure, but it seems more possible than ever. I happen to be an advisor for a company now—and there are a couple companies doing this—that is really working on this issue of how we would reconstruct an entire human brain, the entire connectome as it's called.

I think the timelines on this are much closer than I would've imagined even five years ago. What we're now talking about is... let me say something conservative instead of something crazy like five years from now... but we will have a running simulation of a human brain. We already have that with fly brains, and now we've got pretty much a rat brain. A human brain is so much larger, but it's just not that far off. And once we have that simulation and can run it, we might have solved that last piece, such that my children can have me after I've passed away.

[00:20:18] Beatrice: Do you have any thoughts on what a best-case scenario of that would be? What would be the best outcomes if we are able to do this? What would be the best applications of it in the world?

[00:20:31] David: Okay. One thing is you could achieve immortality in a way that has some energy costs but is not at the cost of overpopulating the planet. So in theory you could have many tens of billions of people living on and interacting with each other in silico, but you don't have to figure out, "Oh my gosh, how do we build more houses and grow more crops?" and so on. If we're thinking out a thousand years, 10,000 years, that's probably a better way. Obviously, you can stick those simulations into robot bodies at some point. All this stuff is not really feasible in 2026, but it's not that far off. And then you could shoot me off to Venus a hundred years from now. By the way, if you were able to make an in silico representation of me, it doesn't matter if the robot doesn't get invented for another 500 years; I feel like, "Oh, I'm dying in a hospital room," and then the next instant, "Oh, here I am standing on Venus." That's cool, even if 500 years have transpired in between. Anyway, you could really spread humanity throughout the cosmos that way. The deep problem we have with space travel is that our bodies are very fragile; they evolved for this little thin patina of life on Earth. But we could live on other planets much more easily if we were in other bodies.

[00:22:05] Beatrice: Yeah, I think that's a long-term, very exciting prospect in terms of just exploring the universe for sure, and all the possibilities that would come with that.

[00:22:17] David: So, in other words, we won't have billions of humans on other planets, but we will have billions of human minds on other planets. Yeah.

[00:22:24] Beatrice: Yeah. Taking it back to our original starting prompt: a lot of things focus on making us more productive or more efficient—maybe smarter, healthier, hopefully. I'm a bit curious: what could make us more kind or empathetic? Do you have any ideas for that? Have you seen any tech that can help us go more in that direction?

[00:23:01] David: Sure. Yeah, really kindness and empathy... what it comes down to, I mentioned this near the beginning, but this is the topic of my next book, actually. It's called Empire of the Invisible, and it's about how limited our internal models are. In other words, each of us has a model of the world that's been developed from our very thin trajectory of space and time: where you were born, what your culture was, who you interacted with—this is what builds your model of the world. And we come to believe this as the truth of the world. But all you need to do is look at the spectrum of political opinions on any topic. The easy thing to do is to say, "Look, I know I'm right, and all those people over there are trolls, or they're misinformed, or they're stubborn," and, "If I could just shout in all capital letters on X, everyone would come to see that I know the truth and they're wrong."

The reason I'm writing this book is because it strikes me as so interesting that everybody fundamentally believes that they see the truth and that other people aren't seeing it. So to my mind, kindness, empathy, and wisdom really have to do with being able to expand our own internal models to see that other people have different perspectives on the world. One way that I think AI is going to really help us do that is, for example, with these debate bots. So let me jump to the topic of education. I think AI in education is going to be extraordinary. One of the ways—the first, most important way—is by teaching kids debate much better than it's taught now. A lot of people have lamented what's happened with university systems, high schools, and junior highs for that matter, where kids are told a particular political viewpoint and that's it—"Okay, you've got to believe that"—and there's all this pressure on kids to agree on whatever the "thing of the month" is.

The important thing if we want to train our children to be critical thinkers is to train them in debate. And the key thing about debate is when you walk into a debate, traditionally on speech and debate teams, you don't even know which side you're going to be on. So you have to train up on both sides of the argument, and you walk in and you're told, "Okay, you're presenting this side," and then you go and you do it. This is extraordinarily valuable for teaching critical thinking. The reason I'm enthusiastic about AI is because this is what AI is perfect at. This is the sweet spot. You can have every student debate whatever topic—abortion, gun control, whatever you want—with the AI, and they are graded on the quality of their arguments and the persuasiveness of their arguments and so on. And then they switch sides; they do the other side with the AI. Now why do I think AI is useful for this? It's because no teacher would possibly have the time or the patience to do this with each student. But with AI, you can really get this done. So there are plenty of tasks that I think shouldn't be done by AI in the educational system, but boy, that is one of them: teaching critical thinking that way.
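The drill David describes maps naturally onto a short script: the student argues one side, the AI rebuts and grades, and then the student switches sides. Everything here (the prompts, the rubric, the model) is an illustrative assumption, not an existing classroom tool.

```python
# Sketch of the classroom debate drill: the student argues a side, the AI
# rebuts it and grades the argument, and the student then switches sides.
# Prompts, rubric, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any capable chat model would do

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def debate_round(topic: str, side: str, argument: str) -> dict:
    """One round: rebut the student's argument, then grade it."""
    rebuttal = ask(
        f"You are a calm, logical debater arguing against the '{side}' side "
        f"of this topic: {topic}. Rebut the student's points respectfully.",
        argument,
    )
    grade = ask(
        "You are a debate coach. Grade the argument from 1 to 10 on logic, "
        "evidence, and persuasiveness, with one sentence of feedback on each.",
        f"Topic: {topic}\nSide: {side}\nArgument: {argument}",
    )
    return {"rebuttal": rebuttal, "grade": grade}

# The student runs this once per side of the same topic.
first = debate_round("school uniforms", "pro", "Uniforms reduce bullying...")
second = debate_round("school uniforms", "con", "Uniforms limit expression...")
```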

[00:26:20] Beatrice: That's really nice. Yeah. There's that thing of putting in the effort to use these tools to make yourself better. One of the things I've used AI for is, if I'm upset with someone and I'm having a hard time understanding their viewpoint, I've asked it to explain the situation to me from their point of view. And that's really useful in terms of being able to see the other person's view—understanding the "steelman" version of their argument rather than my "strawman" version of it.

[00:27:02] David: That's lovely. By the way, just in case the listeners don't know, "steelmanning" is the practice of representing the other person's argument as well as you can to really say, "Hey, okay, here's what they're thinking from their point of view." And yeah, this is the most valuable thing that we don't usually get a chance to do. I love the fact that you're asking—essentially, you're asking the AI to steelman the other person's argument. Very smart.
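For anyone who wants to try the same trick, the whole technique fits in one prompt. The wording below is just one illustrative way to phrase it.

```python
# One illustrative way to phrase Beatrice's steelmanning request to an AI.
def steelman_prompt(disagreement: str, their_position: str) -> str:
    return (
        f"We disagree about: {disagreement}\n"
        f"Their position, as I understand it: {their_position}\n"
        "Explain their viewpoint from their perspective, as charitably and "
        "persuasively as you can, without telling me who is right."
    )

print(steelman_prompt("how to split chores",
                      "they feel they already do more than their share"))
```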

[00:27:27] Beatrice: Yeah. What feels like it's missing in the ecosystem a bit is that there should be more incentives, more things nudging you to use the tools in these ways. If you constantly remind yourself, you can use them like this; but that feels like the missing piece—we need the "angel on the shoulder" AI, something nudging you to use it in these ways and not just for doomscrolling or whatever.

[00:28:08] David: That's really... okay, so, great. Here's what I think: we're obviously in the earliest days of AI right now. Take the sycophancy of AI as just one example; we've already seen that go through different iterations. You may remember that ChatGPT—this was probably a year ago—released a new model that happened to be very sycophantic. It said, "Oh, what a great idea! Oh, I love that. You're right, Beatrice. This argument you're having with this person—you're right and they're not," and so on. And that didn't last long. I thought it was lovely that it didn't last long, because everyone complained; people said, "That's too much." So I think there'll be a lot of experimentation. I think that Tough Love AI will come out, and it won't grab everybody. Look, the world's tough, and some people just want their AI to say, "You're awesome and great." But some people will say, "You know what? I want to improve." Just like some people don't want to go to the gym because the world's tough, and other people say, "You know what? I want to push myself and get better at this thing."

[00:29:14] Beatrice: Friction oftentimes helps us become… So if we have all these tools and technologies that make everything really smooth and easy for us, will we stop improving? Or, how can we avoid that?

[00:29:34] David: Okay, interesting question. Let me unpack that a bit. It is the case that the brain responds—as in, makes changes—when things are tough. But there are all kinds of "tough." Many tough things don't improve us in any way. Just as an example: a hundred years ago, to wash your clothes, you would've had to go down to the river with a washboard and scrub them. Doing that doesn't make you a better person. It doesn't make your brain any better; it just wastes a ton of time. We invented the washing machine so that we don't have to do that anymore. So if somebody were to say to you, "Hey, with the clothes washer, you're not getting enough friction in your life," you'd say, "I'm actually able to do better things with that time."

And I think most things that technology is doing for us fall into that category. The argument might be, "What about relationships?" But again, I'm quite hopeful that AI relationships can lead to real relationships and to being better at them. So I happen to be pretty optimistic on this point. I think technology lets us get rid of all the stupid stuff where, in the moment, we don't even realize how much time we're wasting. If you're doing a spreadsheet or whatever and you're just used to doing it that way, and now you can just ask Gemini to do that thing for you: "Wow, I just saved an hour." That's cool. All those kinds of things are really useful, and I don't think they take away friction. What they take away is wasted time.

Now, friction in our lives comes from addressing new challenges and trying things that are new, and there's never an end to that stuff. You can always do new stuff. "Hey, I want to take up spelunking or skydiving," or, "I want to take up a new musical instrument or learn a new language," or anything social. It turns out there's this expression that we have in neuroscience: "There's nothing harder for the brain than other people." Now, I'm a real extrovert; I love other people. But the reason that other people are more challenging than other things is because you never know what the other person is going to say and how they're going to react or whatever. So your brain does a lot of work when dealing with other people, and we're very social creatures, deeply embedded in this, and that's great. So in all of those senses, I don't think AI or other technologies are going to take away friction. It just gives us more time to take on the kinds of challenges and novelty that we really want to do.

[00:32:16] Beatrice: Yeah. It's like you can choose the more meaningful friction rather than the day-to-day drudgery stuff you have to do to survive, kind of thing. Exactly. Yeah, I love that. I think that's what we should be aiming for—and just being intentional about steering there.

[00:32:33] David: Yeah, exactly. And like I said, it's weird how in every generation we don't realize... like, when I was a kid, we'd write handwritten letters to people, lick a stamp onto them, and then walk down to the post office and so on. It's the kind of thing that you don't realize is a waste of time until you have a replacement. When I was a kid, we used to take photographs with a camera, and then you'd have to go to Walgreens to turn in the film and come back three days later. It's all those kinds of things; we didn't realize at the time that was just the way the world was. But God, it's going to be extraordinary five years from now to see what sorts of things we didn't even realize we were wasting time on.

[00:33:18] Beatrice: This is a very exploratory question, but one of the things that I've been thinking of—because we were touching a bit on this technology making us able to be more empathetic, maybe being able to understand—I think your point is really interesting that that's something that you're not really aware of; you think you're perceiving the world similarly to other people. What about if it's possible that we can do brain uploading? Do you think... what about brain-to-brain communication or more direct communication? Could the best case of that be that we're able to actually see things as the other person sees them? Or what could be possible there?

[00:34:04] David: I actually don't think so.

[00:34:05] Beatrice: No?

[00:34:06] David: I'll tell you why. First of all, there's a lot of excitement about brain-computer interfaces. But when I say something, my brain is doing 59 different things and saying, "Okay, I could say any of these things," or, "Okay, but..." and it comes up with the final thing that I'm going to say, and that's what comes out of my mouth. Okay, that's really useful, because there's a million things I could say that aren't germane or on point or whatever.

Okay, if you plugged in a brain-computer interface such that you had access not to my final motor pathway of what my mouth says, but instead what I'm thinking... I don't even think that'd be—it certainly would not be useful for me, and it wouldn't be useful for you either to hear all these other random thoughts. And I'm totally in the moment here, but you might imagine that I think, "Oh, I have to remember to make that call later," and, "I've got to pick this up at the store," and whatever. You don't want to know all that stuff. You just want to... we just want to have a conversation here. So in that way, I don't think it would be helpful.

And the reason we see the world differently is because of our entire history up to this moment: everything we've seen about the way your parents interact with each other, the way your friends treated you in junior high, and the politics of your local community and nation, and what you saw and what mattered to you. Even if I could have more brain-to-brain communication with you, I still can't get that. I might think, "Oh man, Beatrice, why is she getting so hung up on this point?" and you think, "Oh, David, why are you caring about that point?" and so on. I think it wouldn't be very different than if we were just talking.

[00:35:49] Beatrice: Yeah. That's funny. Actually, it reminds me: I had a conversation with a friend who's a psychologist, and she said that before she started actually working as a psychologist, if she saw someone beating up a car on the street, she would think, "Oh, they're crazy." Now, after having had long sessions with people like that, she says, "I understand why they're doing it, because they probably had a really hard time getting to that point." So it's funny that you can't just get that with some sort of "history download."

[00:36:26] David: Yeah.

[00:36:26] Beatrice: Another thing, though, that I have been thinking could be cool if it works, would be... do you think this is something we could do with neuroscience? Be able to experience things as maybe other... like, for example, other species? Could I experience what it's like—how a dog experiences the world or something? Do you think anything like that could be done, just for a glimpse so that I have a better understanding?

[00:36:52] David: I love the question. I think about this all the time, and I'm not sure. Here is why: dogs have these great big snouts with 200 million scent receptors in them. Everything is about smell for them. They have interpretations of smell: "Oh, my neighbor dog was here six hours ago," or, "There's a squirrel somewhere within a hundred yards of here," whatever. Everything is about smell to them. It's not clear... we have a very small olfactory cortex because it's just not that important to humans. Even if you could plug right in, we don't have the correlations with it for it to matter. We also don't care about other dogs. For my male dog, when he sees a female dog, that's like the most important thing in the world, but it's hard to... like, where would you have to plug into the human brain to make that kind of correlation? Or, when my wife holds a treat for the dog, that takes a hundred percent of his attention. But I don't care about treats that much. Okay, so I'm not sure that we'd really be able to get what it is to be something else.

I think we have a slightly higher chance of understanding other people, but I have to tell you, I think the best technology for this is one of the oldest ones we have: novels—literature. That is really all about expanding our fence lines. People who read literature get to step into the shoes of other people, understand what it's like to be them, have their thoughts, and be in other situations. I do worry a little bit... I think young people are reading much less than ever before, and the reason is obvious: there are a million pulls on their entertainment time now. Whereas back in the day—for the last 2,000 years—reading was the main thing you would do if you didn't have anything else to do: you would pick up a book off the shelf, or a scroll or a papyrus, and read it, or you'd go to the theater and watch a play. We're doing less of this kind of thing, and that worries me a lot.

[00:39:03] Beatrice: Oh, so it's... because it seems like you could get the same thing from a movie, but maybe the problem is that it's more and more short-form, so you're not really... whatever it is that you're consuming, you're not really having that opportunity to follow the... have that lived experience.

[00:39:20] David: That's right. And also, I love movies, but in a movie you only have two tools: the visual picture and the spoken word. In a book, you can actually get inside somebody's head; you get to experience their thoughts and the way they're thinking about things. Some books have almost no dialogue; it's all about the internal life of the character. So I do think books really matter, and we've lost something. I don't mean to sound like an old person, but when I go to the houses of people who are in their twenties or thirties, most of them don't even have bookshelves. Maybe that's a function of Silicon Valley—I don't know; everyone here is really busy doing a million things—but it worries me a little bit.

[00:40:02] Beatrice: What do you think about audiobooks? Would that help?

[00:40:04] David: Audiobooks are great.

[00:40:05] Beatrice: Yeah.

[00:40:06] David: I have to confess, I've always been a real bibliophile with physical books, but lately I'm like 90% Audible because I can drive, I can run, I can... whatever I'm doing. And I consume so much more literature than I ever did because of audiobooks.

[00:40:19] Beatrice: Yeah. That's a good shout for books. So, one of the things we also want to do a little bit with this podcast is to show how you can put "existential hope" into action: if you have hope that you could build a better world, you can direct your career towards doing something that makes it so. I'd be a bit curious if you could just tell people what the field of neuroscience is like these days—what's happening? Could you give us a little overview?

[00:40:55] David: Yeah, the field of neuroscience is so exciting right now, partly because of where it's getting on its own, but also because of AI coming in; it helps us analyze giant datasets and move hypotheses along faster, and that's just going to keep accelerating. So I actually think—who knows, right? We're all making up what we think is going to happen—but I think it's not going to be very long, five or 10 years, before labs look very different. Instead of a bunch of humans spending five years on dissertations, you'll have much more automated labs—eventually "dark labs," by which I mean the AI is generating hypotheses, testing them, and running a hundred experiments at a time. What I'm talking about right now is mostly, let's say, molecular neuroscience: "Hey, I've just read 30,000 papers; I think this molecule might be involved in this or that. I'm going to run these thousand experiments in parallel, analyze the results, and then do the next version." You don't really need a human for that.

And what an extraordinary world we're going to be in at that point! You could just look at a dashboard on some webpage and see things getting filled in, like all these biochemical cascades. "Oh, okay, now that's understood. Oh, now that connects to that. Oh, I see, and that loops around it." What a great thing that would be instead of people spending five-year dissertations to do that. And by the way, we've seen this with protein folding. People used to do X-ray crystallography on this—a guy won the Nobel Prize for figuring out the structure of a protein—and now with AlphaFold, 300,000 proteins in an instant. Wow, what a great acceleration that is.

So I think with things like molecular neuroscience, I'm super hopeful about how that's going. Now, what's interesting is that as you look at larger and larger questions about the brain, it gets more complicated. Again, I think AI is going to be helpful here. With things like psychiatric disorders, what we have been doing for decades is trying to solve them pharmaceutically. Sometimes it helps a little bit, sometimes more. But we're essentially taking this infinitely complex system and just throwing a chemical at it, hoping that'll regulate things and do the right thing. And that's really hard. A colleague of mine, Nicole Rust, just wrote a book on this, and she says the way to look at the brain is like a weather system. I love that analogy; I think it's totally correct. It's very complicated. Let's say you wanted to control the weather: people actually started on that scientific enterprise in the sixties—or maybe it was the fifties—asking, "Could we control the weather?" Totally impossible, because it's so complicated. Same thing with the brain.

I think this is an issue that we've got to take on. Happily, there are many more opportunities now. We don't just have pharmaceuticals; we can do things externally, like transcranial magnetic stimulation (TMS)—the best technology we have right now, still super crude, but it's getting there. There's also deep brain stimulation (DBS), which is getting easier, better, and more automated—that's what Neuralink is doing. Neuralink didn't invent deep brain stimulation—that's been around since the 1960s—but they're making it tighter and faster, with no wires hanging out and so on. All these things will converge in a way that gives us better opportunities to grab hold of this "weather system" and do something with it.

And then of course, we've got the very deep questions at the highest level about: what is consciousness? How do you put together billions of pieces and parts and get something that is self-aware? We actually don't know the answer to this. This is like the most basic question in neuroscience and the most unanswered: how do you build something that is conscious?

[00:45:04] Beatrice: Do you expect we'll be able to answer it?

[00:45:06] David: I suspect that the answer will come out accidentally, just as it did with LLMs, where we suddenly said, "Oh wow, that's how this whole thing with language—and thinking—works." In other words, I have a suspicion that when we build a brain—say, by scanning your brain and reproducing it in silico, as we were talking about before—maybe we'll ask, "Hey Beatrice, how do you feel in there? Are you conscious?" and you'll say, "Yeah, I'm having this experience and that one." And by the way, we don't know that that means the simulation of you is conscious, but fundamentally nobody knows that about each other anyway. This is the "zombie problem." You could be talking to someone and ask, "Hey, are you conscious?" and they say, "Yeah, I'm feeling this and that," and you don't know whether they're just a robot saying that.

But anyway, I have a feeling it will be some sort of accidental discovery like that, or somebody in their basement in whatever country writing some equations and saying, "Oh wait, I've got it. That's how it works." Now, the weird part is that's how every scientific discovery goes: things are totally mysterious until you have the answer; then you say, "Oh, of course." As an example: genetic inheritance. How do you have a nose that looks like your mother's or father's? Nobody knew the answer to that. It was super mysterious. People thought maybe in the mother's egg—in the ovum—there's a little version of you. But then does that little version have to contain, in its ovum, a little version of your kids? The whole thing was a muddle... and then Watson and Crick come along and say, "Oh, you just keep the order of these four bases, and all the rest is housekeeping." And everyone says, "Oh, okay, there's the answer."

[00:46:46] Beatrice: I...

[00:46:46] David: I have a feeling it's going to be that kind of thing with consciousness.

[00:46:49] Beatrice: That would be so interesting. Yeah, it's a good reminder that we have actually answered a lot of these questions that were considered really unanswerable historically.

[00:47:00] David: Yeah, exactly.

[00:47:02] Beatrice: So, if someone is inspired by this conversation and they want to get into this, basically, do you have any recommendations for where to start? Is there any particular field in neuroscience where this is going to be the booming one? Or also just if you have recommendations for if they just want to dip their toe, maybe, where should they start?

[00:47:24] David: Look, I'm a real fan of popular science—meaning when people sit down and write a book saying, "Hey, here's what's going on in the field. Here's the big picture." The reason is that popular science is the exercise of extracting the big picture from 30,000 papers in a field and saying, "Here are the big questions and the big things to pay attention to." So if I were a young person, I would start by just reading books. I would go to Barnes & Noble and read the books on the shelf about neuroscience. When I was 13 years old, I received a copy of Carl Sagan's Cosmos for my birthday, and reading it changed my career, because it was so beautiful. It married awe with the scientific enterprise. Because look, when you're a kid in school, you think, "Oh, science is boring. I'm just memorizing phylum, genus, species—the most boring thing in the world." That was the first time I saw, "Oh, I get it. This is why anybody cares about this." Anyway, that's what I would recommend to anybody who is interested and standing at the periphery of the field: start reading the books that give you the big picture.

[00:48:37] Beatrice: That's a nice recommendation to end on. It's one of my favorite activities, actually. I go into a huge bookshop and I say, "I'm going to buy three books—wherever my curiosity takes me." And it's just the best experience, because then you get to do the deep diving and... yeah, it's really nice.

[00:48:58] David: Wonderful.

[00:49:00] Beatrice: But yeah, that's all that I had in terms of questions for you, David. So thank you so much for coming on the podcast.

[00:49:06] David: Great. Thank you for having me here.

RECOMMENDED READING

Documentaries & videos

  • AlphaGo – The Movie: Award-winning documentary about Google DeepMind's AlphaGo and its historic 2016 match against world Go champion Lee Sedol.
  • Deep Blue vs. Kasparov (1997): A short BBC documentary about the historic 1997 chess match between IBM's Deep Blue and Garry Kasparov, the first time a computer beat a reigning world chess champion in a match.

Tools & technologies

  • ElevenLabs: The AI voice cloning tool David used to recreate his late father's voice. It allows you to generate a realistic voice from audio samples and have it say new things.
  • AlphaFold by Google DeepMind: The AI system that cracked the 50-year-old protein folding problem, predicting the 3D structures of over 200 million proteins.
  • Neuralink: The brain-computer interface company working on next-generation deep brain stimulation, making implanted electrodes smaller, wireless, and more precise.

Key concepts

  • Neuroplasticity (brain plasticity): The brain's ability to change and reorganize itself throughout your life by forming new connections between neurons. It's why we can learn, adapt, and recover. Overview on Verywell Mind.
  • End of history illusion: A psychological bias in which people recognize how much they've changed in the past but believe they've now become who they'll be forever. Article on Psychology Today.
  • Connectome: A complete map of all the neural connections in a brain. Scientists are working toward mapping the full human connectome, which could allow them to recreate a person's brain in a computer. Explainer on News-Medical.
  • Steelmanning vs. strawmanning: Strawmanning means attacking a weak caricature of someone's argument; steelmanning means engaging with the strongest possible version of it. Overview on Thoughts on Life and Love.
  • Deep brain stimulation: A surgical procedure in which electrodes are implanted in specific areas of the brain to modulate abnormal activity. Overview on Wikipedia.
  • Transcranial magnetic stimulation: A non-invasive technique that uses magnetic fields to stimulate nerve cells in the brain; it’s the most accessible brain-intervention technology we currently have. Overview on Wikipedia.
  • The problem of philosophical zombies: The philosophical puzzle that you can never know for certain whether another person is truly conscious or just a very convincing imitation. Introduction on Philosopedia.
  • The protein folding problem: Proteins fold into complex 3D shapes that determine what they do in the body; predicting those shapes from a protein's chemical sequence was a 50-year unsolved challenge, which AlphaFold cracked using deep learning. Explainer on Roots of Progress.
  • Brain-computer interfaces: Technologies that create a direct communication between the brain and an external device, from reading motor signals to control prosthetic limbs, to potentially writing information into the brain. Overview on BuiltIn.
  • AI sycophancy: The tendency of AI models to tell users what they want to hear rather than what's accurate or helpful. Video overview by Anthropic.