Podcasts

SJ Beard | How your personal moral compass helps you build a better world

About the episode

To make the future go well, we might not need a perfect model for its end state, or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?

In this episode we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk, and author of the book Existential Hope.

Some of the topics we discuss:

  • How to shift our focus from "preventing the end of the world" to actively building a future worth living.
  • Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire.
  • Relying on our own sense of “the right thing to do” as a practical guide to make the world better.
  • Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top.
  • Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow.

Transcript

[00:00:58] Beatrice: I'm very excited to be here today with SJ Beard, who has just written this book on existential hope. So obviously, when I saw that you had written this book, I knew that I had to have you on the podcast, basically, because this is the Existential Hope podcast. We need to talk about this. You are normally an existential risk researcher, as I understand it.

[00:01:23] SJ: That's correct.

[00:01:24] Beatrice: Do you want to tell us a bit about who you are and why you wrote this book?

[00:01:29] SJ: My background's in philosophy. I've got a PhD in philosophy from the London School of Economics, and it happened almost by chance after coming out of my PhD that I got a postdoc at the Future of Humanity Institute (FHI) in Oxford. There were lots of people talking about existential risk, and I wasn't actually doing that. The thing that I joined is what went on to become the Global Priorities Institute, although at that time it was just a group within FHI. We were doing moral philosophy; we were doing moral philosophy about the future, about population ethics.

But it was, yeah, just the same kind of stuff that I'd done in my philosophy PhD. And then I got another job at the Centre for the Study of Existential Risk (CSER), and they hired me to carry on doing the same thing. I think they saw what was happening in Oxford and thought, "Oh, we should do some of this as well." And so they hired me to come in and do the same thing, but almost immediately after starting work at CSER, I'm like, "No, I want to find out more about this. I want to get involved in the whole thing." There's so much interesting stuff being talked about regarding the future possibilities for humanity.

One of the things that has always frustrated me as a philosopher is a tendency for analytical philosophers to abstract to such a degree that you can theorize really cleanly—which is great—but what you're saying is almost completely irrelevant for practical, real-world decision-making. Or, when people try and use what you're saying for practical, real-world decision-making, they almost always obscure the main kind of philosophical points because they're just thinking about the big issues that are easy to do philosophy about.

So the question for me has always been: what are the actual problems that I need to try and solve as a philosopher? What can I actually contribute to making things go better? The first thing I realized was that we don't really understand existential risk very much. Like, there were a lot of philosophers in existential risk at the time, and it had this abstracting lens to it. People would come up with these very axiomatic statements about civilizational strategy or about AI risk, and they'd say, "This is what it's going to be like. I've come up with a few axioms from economics and ethics, a bit of Bayesian decision theory, a bit of logic, and I know what the future's going to be like." And I don't think that's the case at all. I really am deeply uncertain about what the future will be like. Unfortunately, my uncertainty about the future includes the possibilities that things are going to go terribly badly. So that's a part of the uncertainty that we need to understand better.

So I started working more on existential risk, but one of the things that I found is when you say to people that you work on existential risk, they assume that you're saying that everything is definitely going to go terribly. It's like, "No, that's not what we are saying." We are saying that we don't know if it will go well or it will go badly. But people don't hear that when you talk about risk; they just hear the bad side. And I found it especially frustrating because the only reason that any of us are talking about existential risk is because we want to mitigate it. Because we want to prevent it. I have no desire to say that things are going to go terribly. I want things to go as well as possible.

So I often joke that I wish that we were called the "Centre for the Study of Existential Hope," because I think that people more readily understand that. But when I talk about there being hope in the future, I'm saying things could be really good, but they might not be. It's not certain. We need to have hope that things are going to go well, but in order to make that hope manifest, we need to work at preventing all the things that might not make it go well. But as I say, taking that approach—applying it more and more to the real everyday—I realized that we actually needed a slightly thicker concept of existential hope than that.

And I think it's not just about saying things could go well. Existential hope needs to be a frame of mind, a mode of being. It needs to be a response to this idea of existential anxiety as well as existential risk. And so, existential anxiety is this classical philosophical concept that you have free will, you could do anything. How do you know what you're meant to do? And when you do things, how do you live with the consequences of your decisions, knowing that you could always have done differently? You could always have come up with something better. We needed to think about existential risk in a way that gave people that sense of agency—that you have decisions that will impact the future of humanity—without being scary or dreadful, without making them fixate upon how things could go badly.

So I tried to write a book to bring all of those ideas together, and unfortunately there's no pithy catchphrase I can give you for what existential hope is. It's something you feel. It's something that I, as an existential risk researcher, have felt more and more as my work on existential risk has continued, and it's something that I'd really like more people to feel. And so in the book, I go through a lot of different aspects of the problems we face and how I've learned to see those problems in a different way, and why I think we can and should have hope in ourselves as a species—that we can navigate this time of perils and we can achieve more beneficial, safer modes of being in the world.

[00:06:58] Beatrice: Yeah, I think there's a lot of things I want to pick up on there. First, I agree with you, and I find it quite fascinating just how human bias tends towards the dystopian. Or if you hear about something good and something bad, it's the bad that's going to stick with you. Yeah, and we all know this at this point, I think, but it's a bias that you constantly have to watch out for.

And I think it's funny that you wanted it to be called the Existential Hope Institute instead, because I think that's also been a reflection of mine—that a lot of the people who started working on existential risk did so because they had really quite grand visions for what the future could be. And so they wanted to make sure that we were able to get there. And so, that's also, yeah, really, really interesting.

[00:07:53] SJ: Yeah. So in the history of thinking about existential risk, you started to get people speculating about it in quite a science-fiction way. The first person to really articulate how human extinction might happen from naturalistic causes, as I say in the book, is Mary Shelley. She touches on it slightly in Frankenstein—this idea that Frankenstein's creation might bring about the extinction of humanity. But then she writes another book ten years later called The Last Man, where she imagines humanity going extinct because of—it's actually a triple disaster. It's mostly a pandemic, but it's also got climatic elements and kind of astronomical elements. Various things happen in the book that kind of culminate in the disaster of human extinction. And she's the first person to really articulate a full scenario.

But science fiction also keeps on talking about this throughout the 19th and 20th centuries. Then in the mid-20th century, you start to get scientists being worried about specific scientific projects. Mostly this is nuclear weapons. Some people from a very early date start worrying about AI. Some people start worrying about industrialization and pollution. And their concern is: this specific thing might bring about human extinction.

But then you get this real paradigm shift in the late 1990s, which arises among, first of all, transhumanists—people who have a particular vision about what a positive future might be. As you say, a very grand, very utopian future. I actually argue against having these kind of visions for our future in the book, but that's what motivates them. And they say, "But on the way to these grand futures, there is a risk that the progress that we need in order to achieve them might bring about the destruction of humanity." And so they then bring together all of these diverse concerns about different kind of "kill mechanisms," as they're sometimes called, into one unified science of existential risk.

[00:09:56] Beatrice: Yeah. It would be interesting, because you say you can't put existential hope into one simple slogan, but I think we also use the term in varied ways. The way that we got the term originally was from the paper by Toby Ord and Owen Cotton-Barratt.

[00:10:21] SJ: Exactly.

[00:10:22] Beatrice: Was that also where you—

[00:10:24] SJ: That's certainly where I first heard that.

[00:10:26] Beatrice: Yeah.

[00:10:26] SJ: It had been used by several people before then, but that's what really popularized it and gave it its kind of common meaning of existential hope as the inverse of existential risk.

[00:10:36] Beatrice: Yeah. But if you were to try and break it down a bit more—I know, for example, it was a year ago you wrote the book, but in the book you had these five bullet points, I think?

[00:10:47] SJ: Yeah, the four bullet points.

[00:10:48] Beatrice: Four.

[00:10:50] SJ: We need to understand that human beings are the agents of existential risk, but that we don't deserve existential risk. So the classic scenario that we are all brought up with about the end of the world is that there is some external force—typically a supernatural being—who has the power to destroy humanity, destroy the world, and that they're going to do it because humanity is sinful, is evil, is not respectful... because we are wicked, we are bad. We deserve it. We can't do anything about it, right? It comes externally.

And one of the things that I realized—one of the kind of coalescing points that made me think I was ready to write this book—is I realized the similarity between this ancient story and the way that I think a lot of us think about the great global catastrophe. We know about the extinction of the dinosaurs, right? There's this asteroid that's coming in, it's going to kill all the dinosaurs. It's a stand-in for God; it is an external force. And I think we very often, when we tell the story, we project onto these "scaly monsters." We put whatever we see as bad about ourselves onto the dinosaurs, and we see them as deserving of their fate. And it's just a really old, really common, really deep story. It really scratches an itch inside of us.

And I'm saying that you need to see existential risk in its contemporary form in exactly the opposite way. Human beings are doing this, but we're not doing this because we're evil, horrible people who want to cause as much destruction as possible. We're doing this because we are a species with a large amount of intelligence, the ability to escape our own evolutionary niche, and considerable control over the technological and ecological systems that we rely on—we can change them in so many ways. It's really hard to do. It does not come naturally. The natural thing is for the systems that you inhabit to shape you to fit those systems. We are trying to do it the other way around. We are trying to modify all the systems that surround us so that they fit us, and that is a much, much more difficult challenge. It's very likely that things are going to go wrong when you're doing that. So that's the first thing: to accept our agency without the guilt, without the sense of blame, and to see this as a big challenge that we need to solve.

[00:13:26] Beatrice: Can I just ask a quick follow-up question to that, which is: but surely there are some existential risks to humanity that aren't human-made, right?

[00:13:36] SJ: I don't think so. There's this movement in disaster studies called #NoNaturalDisasters. And what people argue is that when you get large-scale disasters, even if they have a kind of natural component—it's an earthquake or a tsunami or a volcanic eruption—you see so many failings in the human systems around these disasters, so many human failings that cause them to be as bad as they are. We talk about the creation of "vulnerabilities and exposures," right? Why is it that we built things that were susceptible to being destroyed by earthquakes? Why is it that we put people in places where earthquakes happen?

This idea that there is such a thing as a purely natural disaster—an "act of God," as the insurance industry still calls it—doesn't really make sense. And the same is true at the global level. So there are some theoretically possible things... there's this concept called a phase change in the way the cosmos works. There are some kind of theories about high-energy physics that you might create certain kinds of particles that would then just collapse the entire cosmos in on itself. And if you got caught up in one of those, there's nothing you could do. In order for this to happen, someone somewhere would need to be doing this experiment—but maybe there are aliens on the other side of the galaxy who are doing them, and we are just going to get completely wiped out. That potentially is a purely natural disaster.

But even something like a large asteroid, like the one that killed the dinosaurs, right?

[00:15:16] Beatrice: Yeah.

[00:15:16] SJ: Yes, the asteroid is a natural event, but a loss of sunlight is a survivable disaster. People have worked out how we can feed everyone, or feed nearly everyone, in these conditions. There are plans that are being put in place. It's just that we haven't prioritized developing those plans, developing the infrastructure to make them feasible, being ready to deploy them. We have other things that we need to do instead. But there's no ineradicable human vulnerability to these disasters. We could make ourselves resilient even to planetary-scale disasters like that. So when you see that we have agency in that way, it's not just about preventing hazards; it's also about mitigating vulnerabilities and exposures. No, there isn't really anything—or there is very little—that doesn't relate to human agency.

[00:16:13] Beatrice: Okay. Thank you. I think that clarifies.

[00:16:16] SJ: Yeah. So that's the first thing. The second thing is this idea of anti-utopianism. When we think about positive futures, our minds go to some "happy ever after" end state, right? We solve all the problems and then we just live in this bliss, and everyone is getting all of their values met, and everyone agrees that this is the perfect state and nothing is going to change it.

And I'm very skeptical of that in two senses. I'm skeptical of the reality of achieving an outcome like that, but irrespective of that, I'm very skeptical that aiming for that kind of an outcome is going to end well. I think, as I argue, that the whole concept of utopianism is so bound up with the colonial mindset—the idea that you can have someone in Government House, or in the philosophy department of the University of Oxford or whatever, who just knows what is right for everyone, dictates the policy, and everyone is going to agree with them. And if they don't, then they're "undesirables" and don't belong in this future, and that they're going to be able to retain this power in this stasis.

That comes from—intellectually, that comes very strongly from the European colonial tradition. And I think we are in many ways "colonizing the future." So I argue against that idea, and instead in favor of something called protopianism, which is just: how can every generation imagine making things a bit better? How can we make the next generation wiser than we are? How can we make them more moral than we are? How can we make them more adaptable, more resilient than we are? We should be trying to solve the problems that we are facing right now rather than imagining the perfect end state with what we currently have and then just building towards that—building towards whatever we think that's going to be.

I think starting from the present, starting from the problems we have, is going to work better and have fewer bad results. And it's also something that, in the absence of this perfect end state, you can imagine every generation just continuing to do that forever and ever. It doesn't have to have an end. Things can just get better all the time.

[00:18:32] Beatrice: So let's go on a tangent here as well. Yeah, I think that's a really interesting argument, basically. And I think—I know you mentioned in the book, like you talk about that's a protopian approach, basically—do you think there are any flaws to that? Or like, when I think of what could potentially be the downside of an approach like that, is it that you only focus on removing the "bad stuff" rather than thinking about the best-case outcomes, or really optimizing? And maybe there's some sort of combination that is needed—of some sort of ambition, like maybe moral ambition or something, of how we could be better, combined with this fixing the problems or fixing the cracks?

[00:19:28] SJ: I'd say it's incredibly morally ambitious to try and make things continuously better and to imagine that every generation has the capacity to do that. The natural inclination for humanity throughout history has been to imagine that things are always getting worse and that the best we can do is to try and hold back the tide, right? The natural state is to say, "Okay, we want to go back to something that we think was right and correct and good, and we're just trying to get back to there." To actually say, "No, it's not about doing that. It's about—we are moving up the hill. Things have got better and things will continue to get better." That is an ambitious goal to set yourself, and I don't think you need to have some idea of the end state that you want to get to in order to pursue that ambition.

So in terms of removing bad things, it's not just the worst of the bad things that we're trying to remove; we're trying to remove all sorts of bad things from society and we're trying to become more and more sensitive to different kinds of harm. It's much easier nowadays, for instance, to be sensitive to animal suffering. Partly that's because there is more animal suffering, but also because we already have widespread concern for human suffering across different people, different groups of people, different parts of the world. We no longer have to teach people empathy for humanity generally. Now we can be more ambitious; we can say, "And you should also be empathetic towards sentient animals as well." And that is becoming a more and more credible and widespread thing for people to believe.

I agree with you: the attraction of having an ambitious goal that you are aiming at is a good thing. But it's imagining that the goal is the end—or that in order for it to be valuable, it has to be the end—that I disagree with. Whether it's the "perfection of humanity" or the idea that the only way of knowing what is good is by imagining the perfection of humanity—that's the thing I disagree with.

[00:21:46] Beatrice: And when you say "getting better," does that roughly translate to reducing suffering, or... how would you sum it up?

[00:21:56] SJ: Yeah, so this is something that I don't know the answer to yet, and it's super interesting. Having been a moral philosopher who's then moved into doing things—I've continued to publish in moral philosophy and I will continue to do so, but I now spend most of my time not doing moral philosophy, but doing this existential risk research or AI research. I've had a real sense of detachment from, and re-evaluation of, the field of philosophy. And I have this really big question in my head: is ethics, as moral philosophers think about it, a good way of making the world better?

I know it's a really interesting intellectual problem, and I know that by thinking about ethics, we can make intellectual progress in understanding ourselves and understanding others. And moral philosophers have helped to expose harms in the world—they were early advocates on animal suffering and civil rights and all sorts of other things—and they have also gone on to start up other academic fields like sociology and economics. So it's clearly useful; there's no question about that. But the question is: as a guide to action, is ethics actually any better than morality?

So, by morality, I mean just the general sense we have of right and wrong, and being a nice person, and being a good friend and a good parent, and a caring neighbor and a devoted colleague, and all of these other things that I think are actually what almost all of us (including most moral philosophers) use to make our choices on a day-to-day basis. And so, on my protopian view, ethics plays a role in trying to make the future better. But I think that for most people, most of the time, "better" is actually what your morality tells you. Your human pro-social sense of trying to make things better for the people you care about is often a more reliable guide than a moral philosopher who's got some single theory about right and wrong and is trying to use that to make these sorts of decisions.

[00:24:10] Beatrice: It's probably a safer bet. Or, this is what I find for myself: if I go by simpler virtue ethics—like, I try not to tell lies, I try to be kind to the people around me, these sorts of things—I know at least that I'm not doing any big harms, I think.

[00:24:31] SJ: But at a personal level, one intuition that keeps coming back is that whenever I start telling myself that a lie, or withholding the truth, would be the "right thing to do," and I come up with all these arguments for why it would be the right thing to do, that's when I need to tell the truth. I do my best; I'm a very honest person. But it's when I feel that need to justify and explain that I need to say, "Wait, hold up. This is a very easy game to play." It's easy to come up with justifications to lie. And I'm not a Kantian; I'm not saying that one should never lie. But that moral sense of "no, just be honest and work through the things that are bothering you"—this is an opportunity. If I'm worried about someone's reaction, this is an opportunity to engage with them, and with why they might react in a certain way and why I feel it should be different. You can actually improve the relationship that way rather than just trying to ignore it.

[00:25:32] Beatrice: Yeah.

[00:25:33] SJ: Because that seems like the easier thing. Yeah.

[00:25:35] Beatrice: A lot of things have been done in the name of the "greater good."

[00:25:39] SJ: Yeah.

[00:25:40] Beatrice: But I wanted to also touch on—we mentioned Protopianism. More recently (and it's okay if you haven't read it), but Will MacAskill and Fin Moorhouse did this like "Better Future" series, and they introduced this concept of Viatopia. Do you think that's—did you read it?

[00:26:01] SJ: No, I haven't read that.

[00:26:02] Beatrice: No. It was basically just a concept, and to me it sounded like Protopia; it sounded like iterating. But I think they were—and I'm not going to claim to give the technical definition here—but I think the way that I interpreted it that may differentiate it a bit from Protopia is that there is a little bit more... Protopia comes from "prototyping." Viatopia, I interpret it as: yes, prototyping, but with some more potential goal setting. Not like an end goal, but more...

[00:26:41] SJ: Ultimately the goal of—their goal is to increase the amount of value in the world. It is to produce the best possible future. And so this is a way of achieving that goal; it's a more effective way of achieving that goal. Whereas I think it's quite deeply embedded in the protopian philosophy that we don't know what the ultimate goal is.

What we know is: these are the problems that we're facing in the world that surrounds us, and we can make, as you say, a better prototype, a better version of this world, and then pass that on. And Immanuel Kant talks about this. I think it's really important that you cannot presume to know what the next generation is going to do with the world that you've given them. You might think that you've worked out the best thing to do, but ultimately you are creating something and then you are handing it on to other people. No matter how much power you have, they are also free agents; they also have this free will, and you have to work in a way that acknowledges and respects that agency.

And yeah, I think it's a very natural thing to say, "Oh, we're working together in this kind of intergenerational collectivity, and surely we can agree that we're all going to work towards the maximization of value." But how generations are going to perceive that value and how they're going to try and realize that collective project—it varies. So I think it is important not to get too carried away with protopia as a means to an end. Protopianism is an end in itself. Making the world better for the next generation is an end in itself. And the way that future people will move on with that and hopefully make their own protopia could be very different. And that's okay. That's humanity doing what we do: being free agents, using our existential... using our free will in this broad way that I hope we can, to build existential hope, to collectively use our choices to make things better—but without anyone playing dictator in that game.

[00:29:15] Beatrice: Thanks for indulging me on that. I think we were still on your bullet points. So what was the third one?

[00:29:24] SJ: So the third one is drawing on this—that I think we need to view this as a collective, communal project. Humanity is big—8.2 billion people. It's really big. And history is long. Like, my lifespan is only a very small part of a much, much bigger story. And so, just as we shouldn't rush to the end goal, we also shouldn't assume that the right solutions to problems are going to come from the top down. I think the best designs for existential safety and security have to be self-forming. They have to come from the bottom up. They have to be things that lots of people can realize and see and make together.

I also think this is important because the problems that we have are not just technical problems. Very often when people think about existential risk, especially in relation to AI, the solutions offered are technical solutions. And what's good about technical solutions is that small, highly skilled teams can produce and potentially implement technical solutions. The problem with technical solutions, as any engineer will tell you, is they have to actually be implemented in real-world systems. And real-world systems are messy and complex. And the theorist who has worked out what they think is going to happen at every point—they don't know all the different ways in which things can go wrong. And things can go wrong, and things will go wrong. So having solutions that actually make use of and draw upon the collective experience and wisdom of humanity is really important for those to be successful in the long term.

[00:31:10] Beatrice: Yeah, interesting. I'd never heard this claim before. How did you land on that—that it's important to do it collectively? Was it a process?

[00:31:21] SJ: Part of it is just, I think, being really honest about my own limitations and the limitations of the people I saw around me. So when I started thinking about how we can understand existential risk better, one of the things that I and colleagues at the Centre for the Study of Existential Risk started doing was engaging in participatory processes—saying, "Okay, we don't have the answers, but we can be facilitators who bring together broader groups of experts and people with different knowledge to produce better collective judgments about what we think might be going on, what might be happening, what we need to prepare for."

And one of the things that you realize when you start doing these kinds of processes is that everyone—no matter how senior, no matter how knowledgeable you are—everyone has very limited experience. We tried at the Centre for the Study of Existential Risk to have one of every discipline. I always say there was a rule at CSER that if you were an expert in something, you had to be the only expert in that thing. We couldn't afford to have two experts in any one thing, because we needed to get someone who knew something different into the team. And that was just how we worked.

But even if we had succeeded—which we didn't come anywhere near close to doing—even at our biggest, we'd still have had people with only a very small cross-section of human experience. People who've been through elite universities, who would potentially be hireable by somewhere like the University of Cambridge, are not a very diverse group. So we need to bring in more people. And then the question is: "Isn't this just an EDI policy?" No, it's not, actually. People do have very different experiences. We are these kinds of elite academics. We are the people who make big pronouncements. We are not the people who have to make those big pronouncements work on a day-to-day basis. There are so many things that we just don't know, because we're thinking about the "big things" and have been trained to do that from an early age.

And I just really wanted to be honest about that. And the more I leant into that, the more I realized that this is a strength, not a weakness. This isn't me saying, "Oh, academics are rubbish"—not at all. It is saying that actually everyone has something to contribute. And the best solutions are ones that make use of that collective knowledge and collective wisdom and experience. And we are not very good yet at harnessing all of those skills and experience. We talk about things in ways that deliberately exclude. We bring in ideas and theories that deliberately homogenize our thinking and make us think about some things more than others. There's so much more that we need to make these solutions work. So it's not about saying that we are bad; it's about saying that there is so much more wisdom and experience in humanity that we can draw on if we can design solutions that are more bottom-up in their approach.

[00:34:33] Beatrice: And are there any particular tools that you're excited about now? Because we may have better tools at hand to actually do this now.

[00:34:41] SJ: Yeah, no, absolutely. So, participatory processes are great and I've really loved using them—connecting them with things like deliberative assemblies is even better. You get more wisdom and experience. And I know there's a lot of interest at the moment, for instance, in bringing deliberative assemblies to bear on decisions about how we develop AI. And I think there's a lot of promise in that. And we should be making AI that is responsive to a wider group of people.

I do still worry that, even with that, the way we tend to frame decisions and what we ask people... the standard approach to AI alignment is still these kinds of policy documents, right? Constitutions or whatever, that list a series of rules for AI to follow. And as I say, that's not how morality works. People can come up with rules, but they tend to remember Asimov's Three Laws of Robotics and get stuck on that. I think that if you really want to make use of the collective wisdom and experience of people, you need to be able to frame decisions about AI in broader terms in order to get a wider range of perspectives on how they should be trained and aligned and made safe.

[00:35:59] Beatrice: I think let's dive into the fourth bullet. That's the last one, right?

[00:36:03] SJ: Yeah. So the last one is that existential hope should be reasonably demanding. And this goes back to what you were saying about moral ambition. We all need to have moral ambition. I want to be realistic: we are nowhere near a world in which 8.2 billion people can all have moral ambition. People have so many other things that they desperately need to engage with. And the number of people who can afford to devote their lives to existential risk research is vanishingly small at this point. We should be seeking to spread moral ambition, but we need to do so in a way that people can actually engage with.

And so the idea that "drop everything you're doing and just become an AI safety researcher" or whatever is not going to capture very many people. Ultimately, we need to see the project of mitigating existential risk as something much bigger than that. So in the end, I come up with this kind of slogan: we should start by thinking about the humanity that we want to save, and the different ways of thinking about humanity.

There's this deeply ingrained concept within European culture that humanity is a being that's trying to transcend itself, right? The word "human" comes from humus—comes from the soil. And the story that goes with it is this idea of Adamah (which also means soil). Adam is the first human, whom God breathes life into—into the soil—and it becomes a living thing. And the story says that human beings just exist to transcend our innate "earthiness"—halfway between the gutter and the stars and all of that. And that is baked into what we see in ourselves: that we are "imperfect angels" trying to achieve a deeper kind of perfection.

And I offer an alternative approach based on the concept of humanity within a lot of Bantu languages. We know it mostly from the Zulu word, Ubuntu, but it means "humanity," which is the idea of human beings as collective agents—that humanity is something that we cultivate together. And these two ideas are not incompatible, right? We transcend our animal nature through our incredible social complexity. But I think that if we want to save humanity, we need to be trying to cultivate this collective understanding of the value of human beings and what it is we're trying to save.

And I think developing this—and there are people who are doing this in many different ways, through moral circle expansion, through trying to make the world more compassionate, through trying to raise awareness of big issues—is about trying to get people to be more active and responsive and ask, "What is the thing that I can do?" And it might not be a very big thing, but it is something. And if everyone does something—if everyone finds out how they can be one 8-billionth of the solution, or even if 10% of us find out how we can be one 800-millionth (still not very much of the solution)—as I say, I think that's a much more realistic way that we're going to achieve change. A big thing about my conclusion is: don't be a hero. Don't try and take all of this onto yourself, but ask how you can contribute in some way to some project which is aiming at existential security and trying to make the future better.

[00:39:33] Beatrice: So I think that's a very interesting point. And it leads me to—I think you roughly make this point as well in the book—something I've noticed, which is that people have a lot of hope or belief in, for example, technologies: that they can revolutionize how the world is shaped and how the future is shaped. They have very little belief that we can change systems—and surely, if we're going to realize this vision that you just shared, we would need to have well-functioning, win-win systems in place.

[00:40:15] SJ: Yeah.

[00:40:15] Beatrice: Yeah. What do you think we can do about that in order to make people believe that systemic change is also possible?

[00:40:25] SJ: As John Green says, change is not only possible, change is inevitable. Systemic change is also inevitable. In a hundred years' time, we are not going to be living with the same systems that we currently have. I don't know what systems we'll have in place—we might not have any; there might be no socio-technological systems in place—but things are going to change, and things are going to change at the systemic level.

I talk a lot in the book (a surprisingly large amount for a book about existential risk) about the French Revolution. But I think it was such an interesting time in history when Europe, in a sense, woke up to the idea that people can bring about systemic change. And they did—and it ended up in the biggest war that the world had ever seen. Europe was at war for 25 years following the French Revolution. It was such a momentous upheaval, and those wars happened on every continent. A lot of people talk about it as the first—or the second—World War, which would make the wars of the 20th century the third and fourth. But it was a global-scale event and it brought about many different changes.

Changes in how people saw themselves, changes in how people saw societies and how they governed themselves. It brought about economic and social changes that helped fuel the Industrial Revolution and free up capital. It brought about decolonization—the first decolonization efforts across Central and Latin America. It was huge. It really was. And that happened without social media, without the internet, without most people in the world feeling like they had a voice that mattered, right? It was that waking up.

We have so many more tools of systemic change. We are going to see big changes in our lifetime. So I don't think that's the problem. The problem is: how do we try and navigate this change? Now, I spoke earlier about my kind of skepticism at technical solutions to existential risk. There's another group of people who predominantly say, "Oh, what we need is social solutions," right? "We need to abolish capitalism. We need to instigate huge, large-scale democratic reforms." I'm slightly more optimistic about these solutions, but not much more. I think that when people aim at producing large-scale social change, very often what they actually produce is large-scale social destruction, right? It's much easier to tear things down than to build them up.

So although I talk a lot about the French Revolution, my sympathies are with Edmund Burke, who was skeptical about it and said, "Look, it's all going to end in all this terrible stuff." And he said, "We need change, but we need to manage it." And I think that, yeah, given the severity of the problems we face, people very often look for drastic social solutions—and I don't want to live through that kind of a revolutionary period. So what I'm most interested in is actually socio-technical change.

We are already seeing huge social change brought about, as I was saying, by the internet and social media and, more recently, by AI. The question is: how do we couple that with the kind of social change that we want to see? How you develop the technology influences how it's going to impact society, and how society changes impacts how the technology is going to change. And so we've seen, for instance, with the internet—I think the internet started out as being a huge agent of liberation within society. It still is in some ways, but it's become much, much more repressive. It's much more about surveillance and control and manipulation nowadays.

With the introduction of AI, I think we have another go, another way of getting this right. And when we think about aligning AI and making AI safe, we shouldn't just be thinking about technical fixes to stop AI doing harm. We should be thinking about: how can we release AI models into the world, and how can we as consumers use AI models in ways that increase rather than decrease the power and agency we have in the world? Because people don't want to see this stuff being used to make them unemployed. They don't want to see it being used to manipulate their psychologies. Like, there is no demand for this kind of technology. So how do we give people a greater sense of agency over how they use and deploy AI in their own lives and in their organizations, so that it becomes more of a force for positive social change than negative?

[00:45:27] Beatrice: Yeah, that's such an exciting approach, actually—the socio-technical one—and one that I feel like is not being discussed that much or talked about that much. Do you have any other—for that, for example, if we talk about AI—do you have concrete projects that you've seen, or like ideas that you are excited about to help with this?

[00:45:49] SJ: Yeah, so stuff that I'm currently working on... there's a bit of a division at the moment in the way people think about AI safety and AI alignment. The mainstream approach to AI safety is all about, as I was saying earlier, policy: how do we come up with the rules that we want AI to follow? And I think that's a very technical approach.

There's an alternative method that tries to align AI—tries to change how AI operates—by developing a sense of character, by making it adopt a certain kind of persona which is more responsive, more cooperative, more compassionate, more concerned about trying to make the world a better place. We know that this is being used at least partially at Anthropic, and it seems to be used to some extent at other AI labs.

I'm very interested in this as an alternative way forward, both because I think it allows you to think much more about AI as being part of a moral community rather than cohering with strict ethical norms of philosophy. And as I was saying earlier, I think that if we engage people in deliberative assemblies or democratic processes to have some kind of say over how AI is developed, it gives them much more meaningful input if they can conceptualize and respond to AI in that way rather than just coming up with rules and procedures.

But also, the character of the models that you engage with has a big impact on you as a person. And I think we can see AI as being a partner for making society better, much more if we view making good AI as being: make AI be a positive member of a moral community; have a moral character; be someone who, when people interact with them, makes them feel more concerned, increases their moral expansion, and gives them more ideas about how to be good people—rather than if we just adopt the classical kind of "helpful, harmless, and honest" approach. We just give it these simple rules and whatever the consumer says, "Yeah, just do that, but don't harm anyone." That's not going to make the world better as much as having a friend in your pocket who's also a really compassionate and caring person who cares about you and cares about other people around you. It just, yeah, I think it's got much more scope for socio-technological positive feedback that's going to make the world a better place.

[00:48:29] Beatrice: That’s really nice. And do you imagine that there would be different characters filling different functions or fitting with different communities?

[00:48:38] SJ: So, this kind of question of what it means to have good character—I think there are versions of morality that are pretty universal. Like compassion, for instance; that is a pretty universal virtue. I guess there are some very martial societies, like those with strong codes of honor, that say you need to be really strict. I could imagine the ancient Spartans saying compassion is for the weak, but medieval knights didn't think that. They thought it was really great for knights to have compassion for those who were suffering, weak, and in trouble. And that's what you should do; that was just as important as being fierce and strong against your opponents and your enemies.

So, I think the world has much more agreement about good character than we do about strict ethical principles like ethical theory, but you definitely also want to have models that respond and adapt around different ethical communities. Whether that's in the form of specific personas, or—I suspect more likely—just a model that's really curious to understand the moral community that it is within.

Stuart Russell came up with this idea of Cooperative Inverse Reinforcement Learning—that the ideal AI agent is one who doesn't know what the "right" thing to do is, but it is working with the user to try and work out what that means, right? To try and work out what its goal should be. That should be a continuous task that AI agents have to undertake. I think that you can develop that within the character of a model: it doesn't necessarily know what moral community it is in, but it's always very curious about that and it's trying to find that out. It's trying to make more and more sense of, "Oh, this is not just what these people are doing, but this is what they aspire to be doing. This is how they evaluate themselves and others. This is what they care about. How can I work with that?"
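To make the shape of that idea a little more concrete, here is a minimal, hypothetical Python sketch of a CIRL-style agent: it keeps a probability distribution over several candidate pictures of what a community values, updates that distribution from the choices it observes people making, and acts under its remaining uncertainty rather than committing to one fixed rule. All of the names, value tables, and numbers are invented for illustration; this is not code from the book, from Anthropic, or from Stuart Russell's work.

```python
import math

# Hypothetical, illustrative sketch of a CIRL-style agent: it does not know
# which picture of "the right thing to do" is correct, so it keeps a belief
# over several candidate value systems and refines it by watching people.
candidate_values = {
    "honesty_first":   {"tell_truth": 1.0, "flatter": 0.1, "stay_silent": 0.4},
    "comfort_first":   {"tell_truth": 0.3, "flatter": 0.8, "stay_silent": 0.6},
    "curiosity_first": {"tell_truth": 0.7, "flatter": 0.2, "stay_silent": 0.9},
}

# Start with a uniform prior: no assumption about which community it is in.
beliefs = {name: 1.0 / len(candidate_values) for name in candidate_values}

def update_beliefs(beliefs, observed_choice, temperature=1.0):
    """Bayesian update: value systems under which the human's observed choice
    looks more attractive gain probability (a simple Boltzmann human model)."""
    posterior = {}
    for name, values in candidate_values.items():
        normaliser = sum(math.exp(v / temperature) for v in values.values())
        likelihood = math.exp(values[observed_choice] / temperature) / normaliser
        posterior[name] = beliefs[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

def choose_action(beliefs):
    """Act to maximise expected value under the current uncertainty,
    rather than following a single fixed rule."""
    actions = next(iter(candidate_values.values())).keys()
    return max(actions, key=lambda a: sum(beliefs[h] * candidate_values[h][a]
                                          for h in candidate_values))

# The agent watches a few choices people actually make and keeps updating
# its picture of the moral community it finds itself in.
for observed in ["tell_truth", "tell_truth", "stay_silent"]:
    beliefs = update_beliefs(beliefs, observed)

print(beliefs)                 # posterior over candidate value systems
print(choose_action(beliefs))  # action chosen under the remaining uncertainty
```

The structural point is what matters here, not the specific numbers: the agent's goal is itself something it keeps trying to learn from the people around it, which is the "curious about the moral community it is within" behaviour SJ describes.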

[00:51:12] Beatrice: Yeah, so interesting. You mentioned the French Revolution a bit before. I was, in general, surprised there was so much history in this book. It's a history lesson, also.

[00:51:24] SJ: It is. I was surprised by that too. I had a lot of fun writing the book, and some of the chapters didn't turn out the way that I thought they were going to, but I wanted to let that happen. There were ten years of research, and then I sat down and wrote, and I realized that it's so seldom the case that when people talk about existential risk, they think deeply about how we got to where we are. So often, you just see the present crisis and you say, "This is unprecedented, this is terrible, we must immediately come up with some completely different solution to solve these problems." And then you realize that people have been saying that for a long time.

That view about the world is a very old one. And actually, if we don't learn from the things that have gone before—and how people responding to the crisis of the "now" 200 years ago led into the world that we currently live in—we're just going to do the same stuff again. It wasn't particularly my intention to write four chapters that were very much just historical lessons, but I think it's necessary. I couldn't think of another book that tried to do it, so I left them in, and they are addressing what I think are some really big problems.

You know, how we ended up with very destructive collective social agents—corporations and militaries—in the world; where this idea of utopia comes from; where our ideas about artificial intelligence come from, because that's a lot older than people give it credit for. Most of the debates we have about AI were first had in the 17th century, right? Four hundred years ago. It's hard to move beyond those; they set the tone for how we think about this. And then, as I say, the French Revolution was an event that brought together those three strands, but also showed the reality of what large-scale systemic change looks like—why it is inevitable, but also how it's not an unalloyed good. So we need to have more faith in it, but also be careful about wishing for it.

[00:53:51] Beatrice: Yeah. I think you also debunked a few myths and things like that. It was a really fun read. You talk about key ideas shaping our modern society. Would you like to explain a bit of which key ideas you think are shaping our modern society?

[00:54:18] SJ: Yeah, there are lots. I think it's no coincidence that a lot of the big ideas are ones that come from the 17th century. When you learn a lot of different fields, you find that they start off in the 17th century, right? When you learn physics, you don't tend to learn about any physicist before Newton, but you have to learn about Newton. You have to go through the basic proofs of the Principia. When you learn about philosophy, you have to learn Descartes. You do learn some Plato and Aristotle, but there's this big gap before Descartes.

It was an important time. One of the things that I want to argue is that I don't necessarily think it was an important time because people in the 17th century were having the best ideas. They were having many good ideas, but people have had good ideas throughout human history, and we can and should learn from the whole sweep of human history and also from the huge diversity of our cultures.

I think it's no coincidence that this was a point in human history when ideas were most being exported and imposed upon the world; this was a time when Europeans were coming to see themselves as the dominant force in the world, the colonial masters. They were circumnavigating the world. They could go anywhere, set up colonies, exert their political and economic influence over existing states, and dominate. And their culture was superior, and their ideas were superior. We still live in the shadow of that.

One of my reasons for talking about these big ideas—one of them is the idea of the mind as a computer, another is the idea of utopia, and a third is the idea of the corporation—is that they almost don't feel like "big ideas" to us anymore. This is just how the world works. If you want to imagine things being better, you imagine a perfect world that's structured along these theoretical lines. If you want to think deeply about the mind, you imagine it as this incredibly sophisticated mechanical device. If you want to think deeply about social organization, you follow these legal, social, and economic structures. But they are big ideas, and they obscure the many other ways in which society can be organized, the many other ways that we can think about ourselves, and the many other ways that we can engage with the past, present, and future.

I think that obfuscation is a real loss—that we feel like there is only one way that society can be, and so we feel like nothing will ever change. But if you look more broadly around the world, there are many other ways of being that have worked for a long time, in many different contexts, that are potentially open to being experimented with and deployed within our existing societies. Things can be very different. And that's a good thing; that's where we get our existential hope, our sense of agency, and our ideas of how we can make the world better from.

[00:57:36] Beatrice: Yeah, it's very much like a "fish in water" type of problem, where you can't see the water because you've always been swimming in it.

[00:57:43] SJ: Yeah.

[00:57:47] Beatrice: I think on that note, you do also argue in the book that we should uplift "humanity as a virtue."

[00:57:59] SJ: Yeah.

[00:57:59] Beatrice: And I thought that was really interesting, because I think there's this narrative that is very strong which says humanity is hopeless, basically—that we're useless. You were talking about how you think all existential risks are, in some ways, more or less man-made. And that would probably lead most people to think, "Oh, humanity is useless." But if you think about humanity as a virtue, could you maybe give some examples of when this has been proven or where we can see this?

[00:58:40] SJ: Oh, gosh. Examples of when it's been proven... as I say, one of the things that I'm really keen to argue is it's not the case that we are creating existential risk because we are doing bad things. People can and do cause harm. I have a whole chapter where I talk about the different ways in which people cause harm, and some of those ways are intentional. That has happened throughout human history; we can see that happening today.

But that is not the default human condition. That is not how most people live most of their lives. Even in very dysfunctional societies where people are frequently victims of crime and so on, most human interactions remain loving and remain caring. People form friendships, people form families. It's much harder where you don't have high levels of social trust and you have high levels of crime and dysfunctional institutions, but speak to anyone who's lived in those societies, and they have their stories about the people who've mattered to them and the way that they've worked together to try and address their problems, often with very few resources. It is something you see everywhere.

For myself, I'm very privileged. I live in a society that's not like that at all. The instances of social interactions that are not based upon mutual respect, trust, and decency are really low. They stick with us, though. We'll remember them. We feel the injustice, we feel the humiliation, we feel the sense that we are not worthy of love or of being treated fairly. The insults—all of this really sticks with us and it gets magnified in our minds. But that is not the default experience that we have. That is not the default mode in which we relate to one another.

Even when people do cause a lot of harm—I talk about corporate decision-making in fossil fuel companies or AI companies—you can see people making decisions and you think, "This is going to lead to so much death and destruction," and that is a terrible thing. But if you are the one sitting in that boardroom, what you see are friends and colleagues sat around the table who are expecting something of you, and people who respect and acknowledge you because you are a high-ranking person in a corporate boardroom or a high-ranking military officer. That's how you've built up a positive view of yourself.

And you do good things. If we didn't have fossil fuels—if they were just all cut off tomorrow—the world would grind to a halt. If the world's militaries just disappeared tomorrow, that would cause chaos. Most of what these companies are doing can be seen as being good; it's just that they are also doing a lot of harm. So, I don't think it's about castigating people and saying, "You are evil," because what that does is it makes you close off from the people who are telling you that you're evil, and instead you hang out with these other people who say that you're doing good.

The good exists, and that is undeniable. The change that we need is: can you be a bit more curious about the ways in which your decisions and your actions might be causing harm? And can you be a bit more flexible in your decision-making so that we can all do more to mitigate these harms and work out a way that you can continue doing good, but with less harm mixed in with it in the future? That's what I mean about cultivating humanity. We do care.

When you meet someone who is just completely insensitive to pain and suffering, they know it, too. It's very easy for people to talk about, "Oh, this AI or this corporation is behaving in a sociopathic way." No, it's not. It's behaving like a corporation. People who live with sociopathy know that what they're doing is problematic; they're told about it all the time. They often end up in jail. They struggle to build a positive self-image, and that can be part of the problem. When you have corporations or when you have AI, the problem is that they don't have that. This is not a pathology for them. This is just a genuinely inhuman behavioral condition that says, "You should do this." And they don't have a feeling, they don't have a sense around that, that this is going to cause problems—and that's what makes them so potentially dangerous.

[01:04:04] Beatrice: I think one potentially interesting example—and I don't remember if it was in the book or in another podcast that I heard you on—was the idea that you could argue that Vasily Arkhipov is an interesting example of "humanity as a virtue." Because he was the one who didn't fire the torpedo, I think, when technically he was supposed to.

[01:04:33] SJ: Exactly. They had very explicit orders during the Cuban Missile Crisis. Their submarine had run out of electricity and its radio was dead. It was overheating and they were struggling with high carbon dioxide levels—it was a really terrible situation. The Americans announced that they were going to drop depth charges in this area, but because the radio had been knocked out, the submarine couldn't be informed. They heard the depth charges, assumed they were being fired on, and they had very clear orders that if they were fired on, they were to retaliate with nuclear torpedoes. The captain of the submarine said, "Those are our orders, we're going to do that."

And Arkhipov—he was second-in-command, but he technically outranked the captain. He was actually of a higher rank; he was a Commodore. And he said, "No, I'm not going to consent to this order." He was going against military doctrine. His superiors believed they were being fired on, in an incredibly high-stakes situation, in an immobile submarine... the human desire to retaliate must have been huge. And he said no, and insisted on that. The submarine surfaced, the Americans stopped firing the depth charges when they saw there was a submarine—"Oops, we didn't realize"—and the situation was resolved. He was reprimanded when they returned because he had disobeyed orders.

But there were other people as well; there are loads of cases like this. That was the most dramatic one, where "everything" was saying fire. But there have been plenty of situations where people had orders to do something and didn't. Stanislav Petrov had orders to report it the moment he saw an incoming missile attack on his radar. And he said, "No, there are not enough missiles. I think this is a false alarm. I know I'm meant to tell my superior about this, but I don't trust that they're going to deal with it responsibly, so I'm going to wait." And he waited, and the attack wasn't picked up by any later sensor, and then he could phone it through as a false alarm.

Or there's my very favorite one, just because I could imagine doing it so easily myself (and this is why I should never be in nuclear security). At NORAD, they once put in a tape to record their sensors, but it turned out that the tape they put in wasn't a blank. It was one that had been made for a training exercise. And as it was recording, it was also playing back, so they were getting these full nuclear attack signals coming off their instruments: "Shit, what's going on?" Thankfully they worked out, "No, wait, that was the tape. We used the wrong tape. It's not actually coming."

But all of this stuff... the world has had plenty of situations, and there is clearly, I think, enough evidence to say this: people who are put in that situation of "pressing the red button," they do have some inner human impulse that says, "No, wait." That sense of history riding upon your shoulders—that doesn't drive people to try and take decisive action and be the hero of the hour. It says, "Wait, see, check. Don't escalate." Thank goodness. We would not be here if that wasn't the impulse.

Now, are there people who would act differently? I'm certain there are. I think it was Chatham House that did a survey; they found sixty-one "near misses" along these lines in the eighty years since the Trinity test. That's nearly one a year. It's terrifying, but I also find it reassuring that this is not the species we are.

[01:08:35] Beatrice: Yeah, that's true. I had no idea that there were so many. I also feel optimistic about it. It's scary, yes, but...

[01:08:44] SJ: It's the most terrifyingly hopeful fact I know.

[01:08:47] Beatrice: Yeah, actually. I think the last question I will ask you on "humanity as a virtue" will be: there are obviously trade-offs to this. What do you think are the trade-offs, and why are they worth accepting anyway?

[01:09:07] SJ: The trade-off that I really felt when writing this book is that when you're saying that we need to have bottom-up solutions that involve lots of people engaging with our innate humanity and building collective visions of how we can make society a little bit better, you don't get to say, "And this is exactly how we do it."

When you think, "I'm going to design a top-down theoretical approach to maximize this one value," you can be directive. You can look like a much smarter person, right? You can say, "Yeah, this is what we do," and for pretty much any circumstance someone can throw at you, you can say, "My principle says this is the response to that, so that's what we should do." And that's impressive. I also really understand the temptation, when you feel that the situation is urgent, to want to do that—to feel that saying something is better than nothing, and at least I've got something clear to say and articulate.

When you're trying to build the kind of vision that I'm trying to build, the risk is always: am I still saying something? Or has it actually become so fuzzy that there isn't anything there that people can use? I think there's plenty of substance in this book; I've certainly had plenty of feedback. But it was one thing I really felt when I was writing it and comparing it to other books on existential risk that I've read. I really admire people for going out there and saying, "This is the plan, this is how we're going to do things, this is the principle, this is the vision." I just think it's quite dangerous, and we need to have solutions that involve everyone else as well.

So, that's what I've been trying to articulate, and it's a work in progress. Writing this book was a wonderful way to bring together ten years of existential risk research, experience, and lived experience, and many different ideas that I've thought about. But it's not an end. It's: "Okay, I've got something in print that I'm ready to share with people, find out what they think, get feedback, and help inform what I work on next." I've got a lot of years ahead of me and I'm really looking forward to carrying on these conversations, carrying on building these ideas, and working with people to make more distributed, more collective, more humanistic solutions a reality—because I think we need them.

[01:11:43] Beatrice: One thing that you mentioned that I think is interesting in relation to how our modern society works is that, the way our society is structured right now, only the most basic version of ideas is "memetic," in that it's only the most simplified version that spreads. Am I remembering that correctly?

[01:12:14] SJ: Yeah, the way our knowledge culture works. Absolutely. It begins with the switch from orality to literacy, which is very interesting—the fish doesn't notice the water it's swimming in, right? The assumption within our culture that you get educated by learning reading, writing, and arithmetic is just so baked in. And yet, I know for myself that most of the things that have really stuck with me throughout my life have been things that I've learned from conversations I've had and ideas that I've discussed.

There is actually a lot of orality in our culture, but we still have this idea that the best way to transmit information is through mass communication—through books originally, and then through broadcast media. The thing about mass communication is it assumes that an idea is best when it is completely stripped away from context. Because what that means is anyone, regardless of their situation, can see that idea and appreciate it for what it is.

If you have an oral-based culture—if you're learning through conversations, debates, and discussions—it goes the other way around. The best idea is an idea that's fully contextualized, that makes complete sense to the speaker and the listener, and draws upon the rich culture that they share. Because that's an idea that carries all of this meaning and significance and is one that they can implement and use in their everyday lives. I think that's why, so often, it is when we have discussions and debates that we have more inspiration and more ideas; we learn things. "Suddenly, it clicked."

So often as a university teacher, people will say that to you: "Oh, I was really struggling with this course until we had that one conversation after class, and suddenly it clicked." Of course it did, because we were talking about it in a way that was aimed just at you. So these are both really valuable, but I think we really privilege the mass-communication knowledge culture that strips ideas down to their most basic form and subjects them to memetic evolution, so that even once you've simplified an idea as much as you can, the knowledge culture will strip it down even more as it gets repeated, and it just turns into something really basic.

"Information wants to be free" is a saying that one comes up against a lot, and it's often said in relation to infohazards—dangerous information. It's very hard to contain dangerous information, but it's also easy to contain dangerous information when it is kept contextualized. When it is "this will only make sense to you if you have the expertise, the previous experience, and the shared experience of the two speakers"—that is also a way in which knowledge gets transmitted. So yeah, I think it's really important to keep open to the two different knowledge cultures that we have and the fact that some civilizations and cultures completely flourished on just the orality side of it. We don't need to subject every idea and every discovery to this mimed simplification for distribution to the widest possible audience.

[01:16:12] Beatrice: Yeah, I thought it was really interesting because we just finished this meme competition, and I just read this book called Meme Wars. I think memes are the ultimate expression of this, in how they're really trying to compress just so much into very—

[01:16:30] SJ: Yeah. But there are memes and there are memes. I think meme culture is really interesting because, on the one hand, there is a kind of broad, simple meme—memes like "Distracted Boyfriend." Anyone sees Distracted Boyfriend, everyone knows what it’s saying, everyone gets the whole thing. It’s passed that test of universal understanding, and everything has to be simplified into three elements in one relationship to each other.

But then also, memes are things that people make and share between their friends. And if you look at them, you're like, "What? What is going on here? I have no idea. I don't know what this picture is, I don't know what any of these words mean, I don't know what these symbols are." I'm completely flummoxed by this, because I don't realize that it's fifty layers deep. And we now have this kind of online culture where people try and combine the two: you can get fifty layers of context, and also a very large audience who all "get" all of that context and understand it completely. That's fascinating.

[01:17:50] Beatrice: Yeah.

[01:17:50] SJ: That we have actually arrived at that state. And I'm not sure it really exists anywhere apart from "brain rot"—this third way of being. So that's it. Yeah, I don't really have much more to say about that apart from that it is fascinating.

[01:18:10] Beatrice: Yeah, it's a very in-group type of communication for sure.

[01:18:13] SJ: But it can make the in-group really big.

[01:18:15] Beatrice: Yeah. Is there anything else that you found really interesting or changed your mind on while writing the book?

[01:18:24] SJ: Oh, is there anything I changed my mind on while writing the book? That's a great question. And this reveals a truth about me, which is I'm one of those people who is convinced that I don't change my mind and I'm really stuck in my ways, but actually, I'm changing my mind all the time. I just tend to seek coherence and narrative continuity with my past and future selves.

As I said, there were lots of things that surprised me in writing the book—things that surprised me when I was researching the book and things that surprised me literally as I wrote them: "Oh no, that fits, that works." There were whole chapters which I planned to be one way, and I wrote them and they turned out a different way. And I'm so glad I just let my fingers do that.

In terms of ideas, I think there was the need to not just see a human being as an agent—either a beneficial, hopeful agent or a destructive agent, as we often see ourselves to be—but to reflect upon the much deeper, murkier reality. We aren't put in this world in order to be agents; we are just here to exist. We're just here to do our stuff. And what that stuff is, is determined by many social and cultural factors, and we get to have a say in that. We can use the agency that we have to seek out more, but it has to be a continuous process. And it is always a continuous and imperfect process. At the end of the day, you are who you are. And that's okay.

I think that's something that, at the very least, I was caused to reflect on a lot as I was writing this, and particularly as I was trying to address this question of: what am I trying to say? What can I recommend to people? I want you to feel agency, but it doesn't define you—it's not everything. Ultimately, we do rest in this deeper humanity that we have, and that's a good thing.

[01:21:05] Beatrice: Last question, which I ask all our guests, is basically: if you think of what would be an "existential hope vision" for you when you think of the future, what would that look like? And it doesn't have to be a grand utopian vision, obviously.

[01:21:23] SJ: Yeah. Part of my vision for myself is someone who does things—and does different things. I've never really wanted to have this kind of single arc, single career, single path. I think about my life in stages, and I don't know what the next stage is going to be, but I've always thought, "Oh, and then I'll probably change and I'll do something differently." I'm an incredibly curious person and I'm always led on to thinking about the next thing.

I always want to have that freedom. I always want to have that ability to move forward with whatever I've got with me, but to move forward onto something new. And if I was to live deep into the future—which I'm not sure I want to do, because I think I'm quite happy with the idea that I'll get to do, I don't know, seven or eight big things, and that's quite ambitious in a life, right?—and then I can stop. I don't have to carry on trying forever. There comes a point when I can retire and just look back over my life and say, "Yeah, that went well."

But if I did live deep into the future, I would want to stay curious. I would want to keep on finding what is the new, different thing. How can I continue to grow and to build and to develop and to change? And I would always, as I say, be seeking that sense of narrative continuity, but having that tempered by the curiosity and the desire for the new and the different. So, some way of combining those two things happily for as long as I can. That is an existential hope for myself.

[01:23:25] Beatrice: Yeah. And do you think that sort of applies to humanity as well in a similar way?

[01:23:30] SJ: No, I don't, because I think people are very different. I think there are people who really want to just do one thing and get really good at it. And I think there are also people who—like this idea I have that I do want to build a story and a narrative—if you are like a Buddhist seeking enlightenment, that's the thing you're trying to let go of. You're trying to say, "No, there is no overarching narrative. There is no overarching person that makes all of this make sense and keeps on coming back and being me. I need to let go."

So there are many other ways, and I look at those ways and I'm like, "Yeah, those are also good." I admire those and I want those also to be possible. In a sense, I want to have a diverse life that is nevertheless coherent. I also want humanity to be diverse and nevertheless coherent, but how people would actually manifest that is going to be very different. And I like that.

[01:24:32] Beatrice: Yeah, it's up to each and every one of us.

[01:24:35] SJ: Yeah.

[01:24:36] Beatrice: Thank you so much, SJ, for coming.

[01:24:38] SJ: Thank you very much for having me. It's been a pleasure.

RECOMMENDED READING

Historical Figures

  • Stanislav Petrov – The Soviet duty officer who correctly identified a nuclear missile alert as a false alarm in 1983, and chose not to escalate.
  • Edmund Burke – The political philosopher who argued for careful, managed change over revolutionary destruction.