Podcasts

Michael Nielsen on Hyper-entities, Tools for Thought, and Wise Optimism

About the episode

If you think people who worry about AI are being pessimistic, you might have it backwards.

In this episode, we talk with Michael Nielsen, scientist and writer known for his work on open science, quantum computing, and how our language shapes the way we think. Michael explores what he calls "wise optimism": the idea that genuinely believing in a technology's potential means taking its risks seriously, not dismissing them. 

We also spend a good chunk of the conversation on “hyper-entities”. These are imagined future objects, like the Internet before the 1990s or AGI now, that shape present decisions – what gets funded, who coordinates with whom, and what feels possible. We recently published a report where we systematically looked for hyper-entities and spotlighted our favorites: https://www.existentialhope.com/hyper-entities

Our conversation also covers:

  • How kindness spread through civilization like a technology, and what that tells us about the values we might want to instill in AI  
  • Why some of the most important scientific discoveries happened by accident 
  • Why even the most abstract and "useless" ideas in science tend to end up shaping the real world, both positively and negatively
  • How the tools we use to think (from language to mathematical notation to software) shape what we're able to imagine


Transcript

[00:00:00] Michael: A lot of what passes for optimism in tech seems to me like a foolish optimism, where it's: no, we're not going to listen, we're not going to think at all about the problems that these AGIs could cause. In fact, it's actually quite pessimistic — it relies on you believing that they're not going to be particularly capable.

If you look at the people who believe that they're going to be the most capable — they're the most optimistic about capabilities — they are the people who are actually the most worried. And I think that the wisely optimistic response is to ask: what can we do about it?

[00:00:31] Beatrice: Okay, so today's guest is Michael Nielsen. Michael is a scientist and a writer, I think maybe best known for his work on open science, quantum computing, and tools for thought. He's been a leading voice in, I think, re-imagining how science is done in this age. And Michael, you have this really broad background and scope of things that you've worked on, obviously, and I do love reading your blog.

It's a very broad range of writing, and I think in this conversation what would be really interesting is exploring and connecting much of all these various topics that you've written on to this idea of existential hope and just positive futures in general. So maybe we just dive in, and I think start with one of the central concepts that — when I found it in your work — I thought was just a really interesting way of putting it, which is this concept of a hyper-entity. Could you maybe explain how you came up with that concept and dive into what it is a little? Like, it's not just a medium or something, for example.

[00:01:29] Michael: Sure. Actually, it's funny — it's a little bit embarrassing.

So I wrote this long piece, 40,000 words, about how to be optimistic in the face of artificial superintelligence. And that's a very heartfelt, very serious piece from my point of view — it really mattered to me. But there is this little rant in the middle of it about these hyper-entities, and it really was just expressing a sort of irritation. And I've been a little bit surprised — shocked, even — that a lot of people have told me that, actually, that's the part of the essay they responded to. It's a kind of aside from my point of view, but it's really riffing on some older notions, which I didn't feel were quite the right terms. I just wanted a term for an imagined future object. Some sort of hypothetical future object. So artificial superintelligence, or quantum computing, or in the past you might have had things like universal programmable computers or heavier-than-air flying machines. These were examples from the past where somebody has imagined a future entity, and that then served as essentially a design goal. And the aspect that I'm particularly interested in is the extent to which imagination was required to do that sort of design work. A notion like a universal programmable computer seems very obvious today because our culture carries it, but at the time it was a shock, and it required enormous imagination and enormous understanding to come up with. And those are the connotations that I was particularly interested in carrying. Oh, actually a good one: the Foresight Institute is associated with many of these, and in particular one of its founders, Eric Drexler, has this notion of a molecular assembler.

And that's a great example of something that required a lot of depth of understanding and imagination to come up with, but after the fact seems pretty obvious. It's not just Eric who did that — people like Heinlein and Feynman are sometimes given partial credit. But together, this group of people had this very deep insight, and then it serves as an orienting vision. So it's a very long answer, but that's the concept.

[00:03:38] Beatrice: I think, yeah, I think the only thing that I would've heard someone say similarly is like a "future artifact" or something — almost. That's something that when we've done workshops on world building and these sorts of things, it seems like the term that most... Sure.

[00:03:53] Michael: I guess there are a couple of other terms. Bruce Sterling has this nice term "spime," which he uses to mean something quite similar. It's got a little bit more — it's an object which has a lot of knowledge about itself. Nick Land has this term "hyperstition," which is an alternative to the term "superstition." It's really a short phrase for "self-fulfilling prophecy" — something which becomes so attractive to existing institutions that it then forces itself into existence. But those terms are slightly different in the sense of the connotations, or the particular focus there. Although they're talking about future entities, they're not really talking about the particular actions that you do with them.

I'm interested in the way in which these kinds of entities tend to carry new verbs with them. So the idea of a programmable computer carries the idea that you specify a set of instructions; a molecular assembler will be able to build any type of material object within some parameters, but very broad ones. So you have these new kinds of verbs, or affordances in the design terminology. It's very much a design point of view.

[00:05:02] Beatrice: What was the first term you said there? "Spime"?

[00:05:05] Michael: "Spime."

[00:05:06] Beatrice: "Spime."

[00:05:06] Michael: That's from the science fiction author Bruce Sterling. Yeah. Actually, in fact it wasn't in a science fiction book. He wrote it in — what was it? I can't remember. It's like a long essay about really the design of the Internet of Things — that was the context. It's quite a nice term, but it doesn't seem to have really caught on. I've heard a few people use it, but yeah.

[00:05:27] Beatrice: Not like hyper-entity, apparently. That's the one that — I think, if I remember correctly from your post — you were also suggesting that maybe Silicon Valley tends to undervalue hyper-entities. Yeah. Do you want to...

[00:05:42] Michael: Sure. Undervalue — not exactly. They rely completely on there being a supply of hyper-entities. The way the economy of Silicon Valley works, there's no way of building a moat around it once somebody's had the idea. It's very easy then, typically, for it to be copied and for other people to act on it. So typically it's coming from other parts of the world: academia and art, and the penumbra around them. I sometimes talk about para-academia. Those tend to be places where these ideas come from. Actually, the examples I gave before — molecular assemblers, and artificial superintelligence, and universal computers — they all come out of academia or the little area around it. There are examples like virtual reality, which actually comes much more out of the artistic community.

What's going on there is that the economy — so to speak — in those cases is a reputation economy, which is built around citation or attribution of ideas. And so somebody like David Deutsch can really build a reputation and a career around an idea like quantum computing. But the economy in Silicon Valley works differently — it's built around equity and around ownership of the means of production. And you have these two different systems that are coupled in some way. The ideas coming out of academia are often used as almost a feedstock for Silicon Valley, but it is, I think, somewhat separate.

The way I tend to think of it is that often in Silicon Valley, ideas are very instrumental — useful and understandable — but they're done in service of whatever the product and whatever the market is. Whereas in academia, and also to some extent in a slightly different way in the artistic world, the ideas and the understanding are fundamental — that's the object, that's what you're actually trying to do. And your ability to execute, to build the apparatus or whatever it is, that's a secondary thing that's instrumental to the fundamental goal, which is to obtain that understanding, to obtain the deep new ideas.

So you have these two — as I think of them — two coupled economies, fairly weakly coupled. They both have their own reasons for existing, and I don't want to get into any — sometimes people will say, "oh, you're making a judgment." My personality is such that I'm happier in the sort of the artistic, scientific realm. But of course the world is made up of things which come out of the other realm. So they're both very important.

[00:08:05] Beatrice: Yeah. We had Ed Finn on, who was the editor of the Hieroglyph book that came out, where they connected scientists with sci-fi authors to try to come up with scientific artifacts — future artifacts. Do you see sci-fi as a potential tool for generating hyper-entities? Or do you think that it's maybe too cliché?

[00:08:30] Michael: Historically, yeah. People love to point to the example of Arthur C. Clarke and geosynchronous satellites, which actually wasn't his idea, although he popularized it. And there are a few examples. I constantly find Vernor Vinge — the science fiction author — really remarkably prescient in how he viewed the future. I think I actually see almost more connection to design than to science fiction. This idea of developing new classes of object with new types of action, new types of interface — I think of it as more a meld of scientific understanding. And the fact is, nature is so much more imaginative than we are. And so a lot of the most remarkable things that have been discovered — you think, I don't know, general relativity, or superconductivity, or ideas like this, the fact that light is an excitation of the electromagnetic field — these are mind-boggling facts, just utterly shocking. So much more interesting and surprising than anything a human being has ever imagined de novo. So in that sense, I think you actually need that marriage of very deep insight into how the world works with, again, a design point of view. And science fiction writers to some extent have that. So of course do designers, but science fiction writers themselves, I think, often don't have quite enough depth of insight into the world. They're stuck recycling ideas that they've seen — often very effectively, and they can see all sorts of interesting consequences — but it's not quite the same as invention.

[00:10:00] Beatrice: Yeah. Do you think that there should be more of a conscious effort, for example from Silicon Valley, in trying to design hyper-entities, or come up with new ones to generate excitement or a direction?

[00:10:13] Michael: Really? They're doing what they do. I don't know. I think about — actually a good example — Palmer Luckey, the founder of Oculus, was in a basic science lab, I think at USC, that was just doing experimentation with very simple prototype VR systems. And so that's, in my opinion, an example of the supply chain working reasonably well. You've got this fundamental work being done, which in its own way doesn't need any justification, but in practice it does feed in. There's always this sort of interesting friction between the two worlds because they have two such different fundamental objectives. But I don't think Silicon Valley particularly needs to change what it's doing there.

There is this interesting question about public goods production, which is that the nature of ideas and understanding is that they most naturally want to be public goods — that's in many ways best for our society. It frustrates me a little to imagine what's in the internal mailing lists of companies like Google and Facebook and Microsoft, and actually for that matter now companies like OpenAI and Anthropic. There must be just an astounding amount of understanding hidden there that would be tremendously valuable if it was made public, but is going to be forever lost. I'll bet that most of what we know about distributed systems is basically hidden inside old Google mailing lists. It's not in the company's interest necessarily to be releasing that. So that's one sort of point of frustration. But I think it's intrinsic to the situation, actually.

The companies do have a lot of vested interest in keeping that stuff private, unfortunately. I don't know how to solve that design problem.

[00:11:57] Beatrice: Yeah. I think on the Palmer Luckey point, he — I heard him say that he got all his best ideas for weapons — because he works on that — from sci-fi, which is interesting, and maybe not the way I was hoping people would get excited from reading sci-fi. But yeah, that's why I feel like with the hyper-entities term, there is something in terms of — if we actually want to think about how we can steer towards not just a future, but a better future — there seems to be something in terms of: we should maybe put more effort into how we design the hyper-entities that actually end up having an impact. Obviously it's a very complex and random thing, maybe, what becomes the hyper-entity.

[00:12:41] Michael: Yeah, it is a bit strange. And this is actually connected to Nick Land's point about hyperstition — the extent to which different things become popular, or become seen as targets, in some sense. The difference between now and 2020 in terms of the possibility of AGI is not that large, but the difference in belief is absolutely enormous.

And I think about, say, pick a parallel world — I don't know, cryonics, say. You imagine that the same degree of belief and capital had been infused there. What would that field look like now? It would probably look very different. You still have people working on it, but just not at the same scale. You don't have thousands of people coming into — I don't know — Hayes Valley or wherever, trying to work on this. So that degree of belief and self-fulfilling prophecy is actually pretty important. And it seems weird — it's a little bit random. Certainly it's very much influenced by single actors. If tomorrow some very important investor who participated in prior rounds comes out and announces that they think OpenAI should have a down round, that would have a very large impact. And it would potentially just come down to decisions by a few people.

I don't want this to happen, but if Taiwan was invaded — and again, that decision can just be made by a small handful of people — that would also have an enormous impact on this degree of belief. So it's funny how that works.

[00:14:16] Beatrice: Yeah. It's hard to steer, or hard to design.

[00:14:21] Michael: Certainly, it does seem that it's possible to steer, but it's just so dependent on contingent actions. It's some interesting combination of power and truth — power matters a lot in the short term; truth matters, I think, more over the long term. If AGI is not going to happen through anything remotely related to LLMs, then you can pour as many trillions of dollars into it and it doesn't matter. That's just a fact about the world. I'm not saying it's true — but I'm saying that's a potential way the world could be. You can make two plus two equal five for a while with enough money, so to speak, but over the long term, reality...

[00:15:01] Beatrice: ...wins. That's true — reality wins. Which — my last question on the hyper-entities point, I think — would be: because obviously I think AGI is like the hyper-entity of the moment, at least in our sphere maybe. But are there any other ones that you are personally excited about? Or that, if you think about what would bring potentially the most value to the world, what would that be?

[00:15:24] Michael: It's interesting you say that it's "of the moment." In some sense renewable energy is actually still more important. Fossil fuels are roughly a $3 trillion industry — much, much larger than what is conventionally called tech. And so the ability to replace that by something else is in some sense more important. And the idea of cheap photovoltaic power and cheap batteries remains this kind of orienting vision — they have been for decades, although I think probably a lot of people didn't really — some people still don't believe in those. Actually, just to go back — can I get you to repeat the exact phrasing of the question?

[00:16:04] Beatrice: Oh, I think the question is just: are there any other hyper-entities that you think would be most valuable right now?

[00:16:13] Michael: Certainly I am very interested in and excited by design mechanisms for solving public goods problems and collective action problems. So ideas like assurance contracts, Alex Tabarrok's idea of dominant assurance contracts, and the Vickrey–Clarke–Groves mechanism, which Glen Weyl, Zoë Hitzig, and Vitalik Buterin have built on with quadratic funding — these sorts of ideas. They're very interesting. If you can solve the public goods problem or collective action problems, that's just totally transformative for civilization.
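For readers unfamiliar with quadratic funding, the core matching rule is simple enough to sketch in a few lines of Python. This is a toy illustration of the Buterin–Hitzig–Weyl formula (a project receives the square of the sum of the square roots of its contributions), not any platform's actual implementation:

```python
import math

def quadratic_funding(contributions):
    """Return (total funding, matching subsidy) for one project.

    Under quadratic funding, total funding is the square of the sum
    of the square roots of individual contributions; the subsidy is
    that total minus what donors gave directly.
    """
    direct = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total, total - direct

# Many small donors beat one large donor of the same direct amount:
broad, broad_match = quadratic_funding([1.0] * 100)  # 100 donors, $1 each
narrow, narrow_match = quadratic_funding([100.0])    # 1 donor, $100
```

The same $100 of direct contributions earns a $9,900 match when it comes from a hundred donors and no match at all when it comes from one, which is how the mechanism weights breadth of support over size of any single contribution.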

It's probably related to — it's not the same as — these centuries-old dreams of global governance working, and we've never really figured out how to do that. We've made some progress — things like the Nuclear Non-Proliferation Treaty and the Vienna Convention and Montreal Protocol — very important progress that seems somehow related. But gosh, our governance mechanisms seem primitive.

[00:17:06] Beatrice: Actually, that's a really interesting point because — with the Existential Hope program, I've done a lot of workshops and stuff on what future people want to see, and also what do you think is the biggest blocker — and I think a really common theme is just human nature, or these sorts of things. And yeah.

[00:17:23] Michael: So last year I read Tom Holland's book, Dominion, which you can describe in many ways, but to some extent it's a history of kindness — a history of charity. And he's particularly interested in the question of what influence did the Christian tradition have on our modern notions of what it means to be a good person? And he claims, I think fairly plausibly, that in fact it really contributed a lot to people valuing kindness and valuing charity. Obviously people have always — people selfishly value kindness; we like it when other people are kind to us — but they don't necessarily hold it up as one of the chief virtues.

And he will claim that ideas like "turn the other cheek" and "love thy neighbor as thyself," which come out of the Christian Gospels — that one of the most important impacts of Christianity has been to really amplify those ideas, not just in the Christian world, but in fact in the secular world. He traces how it really incredibly influenced the Enlightenment, and also other cultures. So you can go to many very non-Christian countries and they actually hold views about kindness and charity that Holland claims can be traced back to the Gospels. So I think that kind of thing — I don't know, describing it as a social technology is a bit silly. What's the right way of thinking about it? Something like — it's very interesting the extent to which those notions are constructed out of stories, constructed out of myth, and transformative for civilization. I don't know that you can get things like human rights — I'm reading about the history of human rights at the moment — I don't know that you can get things like human rights, and the modern suffragette movement, and the Civil Rights Act, and things like that, without those.

[00:19:00] Beatrice: Yeah, I feel like you can almost think about them as social technologies. It's, I think, especially interesting now — or it becomes very evident in terms of: this is what we want to instill in our AIs now. Or we want to make sure that this is...

[00:19:14] Michael: Yeah.

[00:19:16] Beatrice: ...we have takeoff with these values, not other values.

[00:19:20] Michael: Yeah.

[00:19:22] Beatrice: So I think a lot of your writing is about this. You are obviously very excited about technology, which — at Foresight we're obviously very excited about technology and like that we could, if you read Drexler's Engines of Creation back in the eighties, it's like all these exciting things that we could do. But then there's also the point that all these very scary things could also be achieved with these technologies. And you wrote the post on — how to be a wise optimist, is the term, yeah — about technology. Could you maybe help us unpack a bit what that means to you, and also why it's important?

[00:20:03] Michael: Sure. I suppose I was just amused, almost, by the growing use of the term "optimism" in discussions about technology and the future of technology, and what seemed like a very bad misuse. If a patient goes to see their doctor and is diagnosed with cancer, the optimistic path isn't to ignore it and to pretend it's not happening and to continue on about your life, just hoping that somehow spontaneously it will go into remission. The optimistic path is to really take it on board — and not to get depressed; that would be the path of pessimism — but to take it on board and to ask: what can I do about this situation?

And so a lot of what passes for optimism in tech seems to me like a foolish optimism, where it's: no, we're not going to listen, we're not going to think at all about the problems that these AGIs could cause. In fact, it's even — it's actually quite pessimistic. It relies on you believing that they're not going to be particularly capable. If you look at the people who believe that they're going to be the most capable, they're the most optimistic about capabilities — they're the people who are actually the most worried. So the actual optimists in some sense about capabilities are very worried. They're the people who are facing up to the fact that there is this sort of very difficult diagnosis.

And I think that the optimistic response — the wisely optimistic response — is to ask: what can we do about it? That doesn't mean you're necessarily going to find an answer. It may be terminal. But at least you can engage seriously with the actual state of affairs. Do your best to find positive things to do. It doesn't mean that you need a solution — but even just the ability to start working on tiny little things is...

[00:21:48] Beatrice: That's actually really true — that the ones that are the most optimistic about these technologies are also the most scared. It's like the transhumanist point, or — I feel like that's definitely an observation that I've made — the people that almost started thinking about all this existential risk stuff, like Nick Bostrom or others, they were transhumanists that got really excited about the future. And then they were like: oh, wait, what if all these things get in the way...

[00:22:19] Michael: ...of making that future happen?

[00:22:20] Beatrice: And so I feel like that's how...

[00:22:23] Michael: Yeah, we ended up here.

[00:22:24] Beatrice: Yeah. I think that it's really interesting also because there's the point that you're almost shooting yourself in the foot if you don't take into consideration the potential downsides of these technologies as well, because you probably might get a lot of opposing views against you, basically.

[00:22:43] Michael: Always in history — the first jet airliners, a lot of them crashed. That problem needed to be solved. In fact, de Havilland went out of business because of it, with their Comet. And there are so many examples like that. The early refrigerators leaked ammonia gas and killed people — they actually needed to replace it. They replaced it, funnily enough, with chlorofluorocarbons, which was a big step up. It didn't immediately kill people, but of course it did have this other problem. And that pattern of actually being honest about the problems and figuring out how to fix them is foundational for existential hope.

[00:23:19] Beatrice: Yeah. I think that — I think it was in this same post — you also write about the fact that advances in science and these things, that's what brings us closer to the truth, or we know more about the world, but truth is like dual-use. Is that still your take? And does that mean we want to get as close to the truth as possible? Or what? Because I think, for example, if you think about the existential hope point — when I ask people what the most existentially hopeful future they can imagine is — it's often "knowing or learning as much about the universe as possible" that's one of the most common things people will say. But yeah, what's your take on that as our ultimate goal?

[00:23:57] Michael: I don't know. That seems a little too Pollyannaish to me. Certainly the way I look at it — a good example: I spent much of my career as a quantum physicist, and there's this astounding set of ideas from about 1900 to 1925 or 1926 that were developed, which represent the most fundamental things about the way the universe works. And it's very important then for later things, if we want to understand proteins, or DNA, or modern semiconductors, and so many other areas of modern material science. But it's also part of what led to the understanding of nuclear physics that resulted in nuclear weaponry. And you don't really get that à la carte, unfortunately — it's all or nothing. Once you've got quantum mechanics, you're going to get all of those things. And this seems to be just an empirical observation. I can't really prove it, but so often any deep understanding of reality ends up having both many positive uses and many negative uses.

I think there's something funny going on where I don't really understand quite why this is — this sort of intrinsically dual-use nature of deep understanding. But a really nice example: there's this famous declaration by the mathematician Hardy that he couldn't think of anything more useless and more beautiful than number theory, which is much of what he'd spent his time working on. And then, not that long after his death, GCHQ and the NSA discovered that number theory could be used to develop cryptographic systems that have all these interesting military and other applications. He was wrong — you took even the most apparently useless set of ideas, which seem very beautiful but very disconnected from human reality, and it turns out they're actually very connected to human reality. Gosh, much of the reason for the NSA — and whatnot — funding of quantum computing is because they want to be able to break these codes based on funny facts about the factoring of large numbers into their prime factors. Hardy — I don't know what he would've made of it. But again, it's just these very deep facts that somehow show up in very mundane concerns, in very important ways, and sometimes in very dangerous ways.

[00:26:16] Beatrice: It's definitely dangerous.

[00:26:20] Michael: It's pretty strange, actually. You see this so much. There are, I suppose, thousands of examples like this. Every once in a while I'll see the Riemann zeta function turn up in an unusual place, or things like this — it's just, these things that seem so obscure and so disconnected, and they turn out eventually to be very important in the everyday world.

[00:26:41] Beatrice: I guess there is something very... it's easy to get excited about that, or it is nice when you're able to make connections and connect dots as a human. And expanding — maybe the Star Trek vision, for example — seems like the most exciting vision that I feel like anyone has been able to come up with so far, at least, that most people I've talked to can get behind.

[00:27:04] Michael: Mm-hmm.

[00:27:05] Beatrice: And that's so much about just continuing to discover the universe and so on.

[00:27:09] Michael: I don't know. I haven't watched Star Trek — I've watched probably a tiny handful of episodes. I think only from the original series have I watched a single complete episode. Something that's funny about a lot of these visions is they're often a bit strange from an emotional or artistic point of view. They often don't — I made this point before about the spread of kindness as something to be valued and as something that then eventually gets built into governance. And I think you often don't see so much of that kind of thinking going into visions of the future. Like, what other fundamental emotional stances will people create and start to spread?

I don't have an answer to that question. I'm just pointing out that a notion like "love your neighbor as yourself" is a shockingly imaginative idea to have, and then to spread and evangelize. And just wondering what other ideas like that — which have to do with this sort of basic stance towards the other — could emerge. Actually, some modern ideas about veganism, and who should be treated as a moral patient, and these kinds of questions — those are obviously very interesting and whatnot. But I think notions around identity are likely to fracture an awful lot in coming years. They are already fracturing in interesting ways in human beings, but they're going to fracture a lot more.

The ability to merge and refactor and change and distill out and do all these kinds of things with artificial minds — there's something very essential about the boundary around a human being, but those boundaries are just going to be utterly changed. It's going to be very hard to say what it means for there to be a boundary around them. I never quite know, when using Claude or whatever, am I using the same model as I was using last week? To what extent are they actually slightly tweaking it all the time? And of course there is that funny feeling I think many of us get, where you keep starting new instances and it's like you're starting a relationship again and again and again with this thing that has some human characteristics. But it's a very strange experience — very human in some ways, but also a very inhuman experience, this reset on identity over and over again. The Groundhog Day of personalities.

[00:29:35] Beatrice: Yeah. I think on the values point, I think that's definitely one of the most underexplored areas, especially in the current context that we're in. And I think — with Foresight, you very much meet with people that are very techno-optimist, and technology is seen as a potential tool for a lot of these things. And I do think that there are potentially exciting applications of just moral circle expansion — also a lot of challenges obviously — but through new ways of being able to communicate, like neurotechnologies and these sorts of things, which is interesting. And maybe being able to communicate with someone on just a whole different level of granularity or nuance could be like a new moral revolution or something like that — would be the best possible outcome.

[00:30:23] Michael: It certainly seems like it'll be — I'm not sure it'll necessarily be best, but it'll certainly be an expansion and a change in really interesting and challenging ways.

[00:30:32] Beatrice: Yeah. I think that unless you have anything else to say on that, I'll jump to another point, which is a bit like the existential hope...

[00:30:41] Michael: Actually, can I just mention one thing? When this kind of point is made, people will very often name David Pearce's The Hedonistic Imperative and these sorts of ideas. And I find it interesting that they always name the same example. When you notice that happening in any domain, it's often an indication that not much else is really going on — the example itself is very interesting, but it doesn't suggest to me that the area has been well explored. It's a little bit underexplored.

[00:31:07] Beatrice: Yeah, it's interesting — is it underexplored, or is there nothing else there? But I do feel like there is something there.

[00:31:13] Michael: There's no way there's nothing else there. You think about even just the way in which capitalism has, for example, modulated our values in so many different ways. And so any changes to that are going to modulate our values in very interesting ways. There's just too much moral change over the last — gosh — I read a little New York Times piece from about 1905, which recorded that — I've forgotten the name — Ota Benga, I think it was — a man had been brought from Africa and was now being housed in the Bronx Zoo. And the article, it's not exactly approving, but it's not exactly shockingly disapproving either. And can you imagine that headline now? No, it's just impossible to imagine.

[00:32:10] Beatrice: Not even on the internet.

[00:32:12] Michael: No. It seems just ludicrous. And yet this was a hundred years ago. So those differences — it's funny how slow moral progress seems, and how stop-start. And yet, as soon as you go over a hundred years — I know I'm using the term "progress" in a kind of Whig view of history, but there's definitely change, really big changes. And people like Martin Luther King, or Gandhi, or Mary Wollstonecraft, make such enormous differences in the way we think.

[00:32:39] Beatrice: Yeah, I was thinking about this just because I was reading a book about women working in a hotel a hundred years ago, and I was like, "we get to work" — because that was the exciting thing then. And yeah, we definitely — when people mention only one example also...

[00:32:56] Michael: Yeah.

[00:32:56] Beatrice: ...it is interesting. I think with futurism in general, I feel like this has been the case for quite a while, or at least since I became interested in it. And when I've talked to people, like for example Christine, our co-founder...

[00:33:09] Michael: Yeah.

[00:33:10] Beatrice: She's been around with Drexler since the eighties. And it's the same things being discussed. Obviously things feel more — for example, AI feels like it's actually coming now.

[00:33:23] Michael: Yeah.

[00:33:24] Beatrice: And she said that it feels different also, compared to — because they were obviously talking about nanotech in the eighties. And she said that this, we can see the progress just much more clearly now. But it is interesting, I think, with this — that's why I think that people connect to the hyper-entities idea so much, because the vision for the future is a little bit outdated. I feel like some people can get behind it and the Silicon Valley mafia doesn't really mind that type of vision of the future. But it doesn't feel like it speaks very broadly to people, on a more global scale. And I think that even if you prompt your LLM to create a vision of 2045, it's going to be silver clothing and white houses — it's just: oh, because that's how people imagined the future in the sixties or something like that.

[00:34:18] Michael: It's something — I suppose I think this is something where a lot of the best science fiction writers are doing really interesting things. I love C.J. Cherryh, for example — she has just beautiful ways of thinking about identity. Aliette has a really nice review. It's a very short review of Cherryh. It just points out that most people think of an identity crisis as being "am I in the right job?" — Cherryh's characters, their identity crises are about "am I in the right species? Am I sentient?" — these kinds of questions. And it's a good description. She's very good at thinking in that kind of way and just drawing it out and expanding your own consciousness of what's possible.

It's also something I love about some of Ted Chiang's writing, which tends to be much more near-future. But again, he's a person that has this very expansive notion of what being a human being is about, or what being a sentience is about. So it tends to be a bit more interesting than just being about technology — if that's a funny way of putting it. I suppose I think of technology as the interface between humanity and the universe — so I'm putting a lot of focus on humanity there, and not so much on the universe. But all three of those things are very interesting. I suppose as a writer you maybe need to focus a little bit on...

[00:35:44] Beatrice: Yeah. I wasn't — I'm not familiar with this writer. I feel like I have to...

[00:35:48] Michael: Cherryh.

[00:35:49] Beatrice: Yeah.

[00:35:50] Michael: Oh yeah, she's won all the awards. And her book Cyteen is, I think, probably her best-known book. I've probably only read maybe five of her books or something, but Cyteen is one of my favorite novels. It's very slow to get started — I almost gave up several times in the first hundred or hundred and fifty pages — and I've probably reread it half a dozen times now. It's a very interesting exploration of what it means to clone an identity, and it explores that through multiple instances in very different ways, quite remarkably insightfully.

Actually, her background is really interesting. She was, I believe, trained as an archaeologist, and it really gives her — she's not a technologist; she has this deep tradition instead, of how people enter the world. And there's also something very interesting — just in the aesthetic, or more than the aesthetic — something very interesting going on that I don't know enough about archaeology to understand, but you can feel it. It's like this person has a depth to them that is very interesting.

[00:36:56] Beatrice: Yeah, that's interesting. It must be like a very — she knows what it's like to explore past civilizations at a very physical level.

[00:37:05] Michael: Yeah, I guess so. I don't know how much fieldwork or whatever she did. In fact, I'm not even a hundred percent sure where I got that information that she was an archaeologist — or that she was trained as an archaeologist. But it is very consonant with what it feels like to...

[00:37:24] Beatrice: ...to read her. We'll link to it on this episode so people can explore it — myself included.

Yeah, I think that — actually, one of the — I'm going to move us on to the next pleasure, also, but just while we're on this: I think my favorite sci-fi TV right now is the Murderbot series, which is available on Apple TV. Now it sounds like I'm advertising them, but I was just really excited to see something so everyday-light and relatable in sci-fi. Yeah, it's — I recommend it for anyone. Yeah.

I think what I wanted to also make sure we have time to talk about is just — I guess it's the point that relates to what we've already touched on, I think. But I know that you wrote about the fact that, for example, breakthroughs like language don't necessarily come from goal-driven processes. So it's a little bit of what we talked about before. And so if we're trying to think — we want to invest very wisely for the most optimistic or the most positive future — do you think that we should try to engineer for a specific outcome? Or are there any underlying systems instead that we should focus on, like tools or institutions that will open up the possibility for positive surprises? So is it these very concrete ideas that we should aim for, or is it more like creating a fertile ground of discovery that we should invest in more?

[00:38:47] Michael: I think you need to do both those things. My personality is to prefer the more exploratory approach. One of my favorite facts is: Charles Darwin didn't go on the Beagle to do biology. He went on it to do geology — he thought he was going to be doing geology, with a little bit of biology on the side. Talk about undirected exploration paying off. And people like Newton didn't have a goal of inventing modern science. He couldn't have — he discovered the goal. Einstein couldn't have had — he didn't write a grant application to change our notions of space, time, energy, and mass. He was just exploring, and those are things that he discovered after the fact. So that's my personal prejudice — in those directions.

I actually love the language example that you bring up — again, language wasn't the result of a grant application and a big push by the NIH or NSF. It's something that emerged. But by the same token, I think you were pointing — in some of the background materials you sent me before — at things like the Apollo program as a really interesting kind of unifying vision. And of course, in a really interesting way, your question is about the value of coordination versus exploration. If you're going to coordinate, goals are very useful for coordinated attacks on problems. You build LIGO this way, you do the Apollo program this way, you build the LHC this way. And exploration is more something which is done by individuals or small groups.

Part of the reason I maybe feel passionate about defending exploration is my sense that the current time in history is a little bit anomalous in that the tendency of bureaucrats and the tendency of leaders is to prefer a goal-oriented approach — a very natural thing to want to do. You can make the big announcement: "we're going to cure cancer" and so on. And so you can direct a lot of money in that way. Whereas this other approach — where you don't actually know what it is that is going to be found — is harder to defend. It's just so much more hunch-driven, and in particular in very hard-to-articulate ways. So I feel like it needs a bit more of a constituency.

It is a bit funny that in fact our society has adopted that. Vannevar Bush wrote the famous report Science, the Endless Frontier in the United States, which led to the foundation of the National Science Foundation, and a lot of copies in many other countries. One of the things that report did — I believe it's one of the first places the term "basic research" was used in this sense — was coin that term as part of a political argument in favor of basic, undirected, exploratory research. And he won that argument really comprehensively, which is, I think, why there is as much support for basic exploration as there is now. But the natural pendulum swings in the other direction. The high-modernist approach is to map out where you're going to go.

My friends Adam Marblestone and Anastasia Gamick are running Convergent Research, which builds these focused research organizations. They're lovely, and I am very glad they're doing that. But I don't want a future in which everything's an FRO either — I think that would be bad. I think they're also very aware of that. But it's hard — like, what's self-limiting or self-regulating? What sets the scale? I don't actually know. Certainly that kind of model is very attractive to political...

[00:42:34] Beatrice: Yeah, I think as a funder it's definitely very scary to fund the exploration, because you may end up with nothing.

[00:43:40] Michael: In fact it will end up with nothing almost all the time. Although if you look at historic examples — how much future work is justified by Max Planck inventing quantum mechanics, or Albert Einstein inventing general relativity, or Darwin, or Marie Curie and radioactivity — exploration deserves some credit for all of that. It's really this point — David Deutsch makes it quite nicely: you can only plan, you can only set goals, if you know what's out there. And yet, by definition, we haven't seen most of the universe — most of the parameter regimes are still completely unexplored. So you can't really plan well in those places. And yet historically, they've been the most exciting places to go.

[00:43:40] Beatrice: Yeah. I think I want to explore something that touches on this — which is, I think to sum up that point, it feels like it's pretty clear that both are needed. We need the exploration, which probably takes some people with the guts to fund that, and we need the coordination. But I do know that in one of your posts you actually said that maybe we should consider having a serious discipline of imagining the future, and that could actually be really important. And so that would be a bit fun to riff on. I don't expect you to have a perfect answer, but if we actually think about doing that in practice...

[00:44:16] Michael: Actually, can I just clarify something? I think I meant something very specific by that. People have been imagining the future seriously for well over a hundred years — I think about Herman Kahn and people like that, and many others. I was referring very specifically to this task of conceiving of new hyper-entities, in particular taking the design point of view which says that the most interesting thing you can do is essentially find new verbs. So that was probably the thing I was pointing at — the notion that, for example, mechanisms like Vickrey-Clarke-Groves, or quadratic voting and funding: the idea that if you change the way in which you vote or allocate resources into this kind of model, it will help you solve public goods problems. That's a shocking set of new ideas. That's an example of that sort of imaginative contribution.
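The quadratic allocation idea mentioned here is usually credited to Buterin, Hitzig, and Weyl's "liberal radicalism" proposal (a relative of the Vickrey-Clarke-Groves mechanism rather than VCG itself), and its matching rule is simple enough to sketch in a few lines. A minimal illustration, not production mechanism design:

```python
import math

def quadratic_match(contributions):
    """Quadratic funding: a project's total funding is
    (sum of sqrt of each contribution)^2, so the matching
    subsidy is that total minus what contributors gave directly."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# The rule favors broad support over concentrated money:
# 100 people giving $1 each draw a $9,900 match...
print(quadratic_match([1.0] * 100))   # 9900.0
# ...while one person giving $100 draws no match at all.
print(quadratic_match([100.0]))       # 0.0
```

This is the sense in which the mechanism targets public goods: the subsidy flows toward projects that many people value a little, rather than toward whatever a single large funder prefers.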

That was the point — it wasn't about predicting the future, it was about imagining the future. And so much of the work that's been done on what's been called foresight studies, or futurism, and various other labels — it's often about prediction and some degree of imagination. But it tends to take the existing objects as being inputs to the process, not really objects of the process. And I'm talking about making them more of an object. Hopefully that's clear.

[00:45:33] Beatrice: Yeah, no, I think it's definitely clear that it's the imagination — which I think is really interesting, because I think that's often what gets taken less seriously, or it's easier to wave off. And maybe harder to see short-term value in some of those things.

[00:45:47] Michael: I think the question of how you validate the imagination is a really interesting one. One of my favorite examples: Alexei Kitaev has this incredible idea of a topological quantum computer, which is founded on a state of matter that naturally wants to store quantum states, and where, if you twist it around in the right way, quantum gates are applied to it. This should sound ludicrous and almost incomprehensible, but it's such a feat of imagination to conceive that this would be possible — and then actually, technically, to come up with models in which it's very plausible. But it's those models, and then ultimately actually building the systems, that validate the imagination there.

I think something like the assemblers — first the book Engines of Creation, and then the PhD thesis, Nanosystems — did all this interesting validation work, showing that it was at least somewhat plausible that you could build these systems, and what they would be used to do. So there is some notion, somehow, of validated imagination. Yeah, if that makes sense.

[00:46:59] Beatrice: Yeah, that's true — it's like scientifically grounded, or like maybe it's physically possible.

[00:47:04] Michael: It's physically possible. And also you want some sense that there's something essentially new here. Like, I gave this example of the topological quantum computer, and from the point of view of a quantum physicist fifty years ago, if you told them it was possible to store quantum states and protect them in these macroscopic phases of matter, it would've just seemed impossible — ludicrous. This is not the way quantum physics works. And yet it turns out that, with some caveats, that's the right way of thinking. There's a surprising — it's a shocking scientific fact.

[00:47:39] Beatrice: And what do you think about this space? For example, we used to have the Future of Humanity Institute in Oxford, and that shut down recently. And I've heard some people talk about the need for more macro-strategy thinking. Do you think that there's a void for that? And if you were to decide that we're going to try to do something that works on this — imagining the future — and let's say you got a hundred million dollars or something, what would you do? What would be the best course of action, do you think?

[00:48:14] Michael: I've got a hundred million dollars to do what, exactly?

[00:48:17] Beatrice: To basically put this idea into action of a serious discipline of imagining the future — that we need that.

[00:48:23] Michael: Around this specific idea of imagining future objects, future entities?

[00:48:28] Beatrice: Yeah. I could see it — with the macro-strategy point, maybe those should be kept separate, because yeah, it could be just: the imagining is one thing, and macro strategies seem actually like something else, where you...

[00:48:39] Michael: Interesting.

I just think about where these ideas have tended to come from, and usually they've come from people who are deeply embedded in particular disciplines fairly early. The first arguably foundational paper about AI is Turing's "Computing Machinery and Intelligence," from 1950. What I'm getting at is this type of work tends to be deeply grounded in particular fields, but then also somewhat anomalous within those fields. There are very few such papers. There are these strange papers by Richard Feynman and David Deutsch and Richard Jozsa and a few others, essentially proposing what a quantum computer is. And then once you've got the model, normal science takes over, and other people with very similar training develop the idea further — but they're not really conceiving of fundamental new affordances.

What I'm getting at is: when it's very distributed and very embedded in a deep understanding of particular parts of the world, it's actually hard to see. I suppose the idea that I still quite like is probably some sort of vision prize — where you just solicit vision papers of this type. God knows how you get judges who are broad enough to actually judge this sort of thing well. And you'd really not be looking for the flashiest outcome or the most flashy-sounding possibility. You'd be looking for depth and surprise somehow — which is the thing. If I think of a good example: I've given things like topological quantum computers as an example of something surprising. An even more surprising thing, I think, is something like public key cryptography, which just sounds impossible. Surely you should have to exchange key material to be able to communicate privately — and yet it turns out that, if you understand one-way functions and related ideas sufficiently well, no, it's actually not required. There's a sense of shock there. Your first thought when you hear about public key cryptography should be "I can prove that's impossible," followed by "oh, and here's how it works."
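That "sounds impossible" quality can be made concrete with a toy Diffie-Hellman exchange, the key-agreement half of the public-key story. This is a deliberately insecure, tiny-number sketch for illustration only; real systems use very large primes or elliptic curves:

```python
# Two parties agree on a shared secret over a fully public channel,
# without ever transmitting any secret key material.
p, g = 23, 5              # public parameters (tiny and insecure, demo only)

a = 6                     # Alice's private exponent, never sent
b = 15                    # Bob's private exponent, never sent

A = pow(g, a, p)          # Alice publishes g^a mod p
B = pow(g, b, p)          # Bob publishes g^b mod p

alice_key = pow(B, a, p)  # Alice computes (g^b)^a mod p
bob_key = pow(A, b, p)    # Bob computes (g^a)^b mod p

assert alice_key == bob_key   # both now hold g^(ab) mod p, while an
                              # eavesdropper has seen only p, g, A, B
```

The eavesdropper's task — recovering a from g^a mod p — is the discrete logarithm problem, one of the "one-way functions" Michael mentions: easy to compute forward, believed hard to reverse.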

It's just amazing. Actually, there's an interesting connection to nanotechnology. Ralph Merkle, I guess, is one of the inventors of public key cryptography, and he's also one of the pioneers of modern nanotech. That's interesting that he did both those things, because they're very different at some level.

[00:51:07] Beatrice: Interesting that he's done both.

[00:51:11] Michael: Maybe I'm empirically observing that I was wrong before in my statement about the depth of embedding into particular disciplines. Maybe Merkle's a counterexample.

[00:51:19] Beatrice: I don't know. I think the interdisciplinary angle is in general an interesting point — at least at Foresight, we often see that breakthroughs come from interdisciplinary crossover, and maybe especially now. I think that's also one of the promises of AI for science as well — it's what I've heard from things like PaperQA and these sorts of tools: they want to be able to translate between the disciplines, because there's a lot of information that may not have been translated between fields, and that's maybe where a lot of the low-hanging fruit for scientific discovery could be now.

[00:51:55] Michael: I'm very interested in what I call "vision papers." So when people do sketch out molecular assemblers, or sketch out quantum computers or whatnot, those papers do often read quite strangely, and they stick out a little bit in the scientific literature. They often had a very hard time getting published — not always, but that seems to be a very common feature. And I think it's because they're not normal science in the conventional sense. They're proposing some imaginary future object and saying it will have these more-or-less well-defined properties. Sometimes it's very well defined — I think Turing's paper on programmable computers is very clearly defined. Sometimes it's very vague. Alan Kay has this lovely paper about the Dynabook, which is essentially like a much-improved iPad from the 1970s — but that's quite a vague paper. Nobody would know anything about that paper if Kay hadn't then gone and actually done a whole lot of it.

But the question of how you create a venue for that, and how you make it into a serious discipline — like, what's the right standard there? I suspect a lot of those papers have zero impact for a long time — not always, but surprisingly often. So I don't quite know what to do about that. Maybe that's part of the point of a vision prize — if you solicit ideas, you're creating a little bit of a venue for it, and some sort of normative standards around it. They're not necessarily going to be appropriate for everybody, but if they're appropriate for some people, maybe then they start to set. It just becomes a venue for conversation about interesting goals that people didn't formerly have.

Yeah, I suppose actually in some sense the computer science community does this a little bit. I think about things like the programming language community, where a lot of the most interesting work is about finding new fundamental abstractions which can be used to control computers. And so there's a little bit of that feeling there — although even there, much of the work that's done is just "let's figure out how to improve our type system a little bit" or whatever. Sort of very technical, incremental work. Very interesting, but — I'm just riffing and not really making much progress, I'm afraid.

[00:54:05] Beatrice: I think that's been my experience also with vision papers — I think it's a really interesting concept. But what you say is true: almost who would be the judge? You would just need someone with great intellectual taste or something like that. And I feel like the judge is almost going to be harder to find than the ideas.

[00:54:25] Michael: Actually, Drexler had a lot of problems being supervised — my understanding is at MIT — because: what is he doing? He's inventing a field. And it's interesting the extent to which his advisor, who I believe was Marvin Minsky, supervised a lot of people who did that in different ways, in different fields. Minsky invented the confocal microscope. And his PhD students invented Scheme, I believe, and all these other things, across a wide range of different areas. Maybe that's an example of almost a factory for these kinds of ideas.

[00:55:05] Beatrice: Yeah, somewhere where it's okay to be more interdisciplinary, or more just... inventing a field, apparently, to be very exploratory.

Let's try to talk about the tools for thought point, because you've obviously written about this. But if some listeners are unfamiliar with this concept, do you want to just explain what we mean when we say tools for thought?

[00:55:29] Michael: Sure. The phrase is due to Ken Iverson, the inventor of the APL programming language, going back to the 1960s. His Turing Award lecture was titled "Notation as a Tool of Thought," and he was very interested in this idea, which has a very long history, that somehow the symbols we use influence the way we think, and they influence expressivity.

This is an idea I suppose most often associated with Sapir and Whorf — sometimes regarded as a bit disreputable. The idea that maybe our language influences the way we think a lot. I think if you take a long enough historical view, it's clearly true. Obviously language was an enormous bootstrapping event for humankind, however old it is. We don't really understand the origins of language very well. And then these apparently related things — it's not quite clear where mathematics comes from, or to what extent it's using the same circuits as language, or what it's grounded in — but again, that seems to have expanded our world really dramatically.

The ability to build models on paper — it's such a shocking thing. You think about something like the first nuclear weapons: this began as graphite squiggles on tree pulp. It's such a shock. Szilárd, I think it was 1933, realizing that nuclear weapons would work. And it's based — it's not based on empirics, it's fundamentally based on models of the world which we write down with pencil and paper. That's such a wild fact about the world, that we do have these tools for thought. It seems to be true — I believe it's true, anyway — that much of human history has been about gradually upgrading and improving those tools. You invent numbers probably just to solve resource allocation problems; you gradually improve your number system; you invent Hindu-Arabic numerals. And those enable all sorts of things. And we don't necessarily know what successive steps are going to be found.

It's very tempting to think that modern alphabets and modern written language are the be-all and the end-all. But almost certainly that's not going to be the case. You think about neural interfaces, and it's not at all apparent what the right way of being linked mind-to-mind is, but probably much more interesting ways are possible than merely speaking to one another, or smiling at one another, or using facial expressions and body language and touch and so on. But those are all design questions — so both rooted in what's possible in the world, but also something about imagination.

A real sort of epiphany for me as a kid growing up was the very first time I ever used a paint program on a computer. It was MacPaint on a Macintosh. And it was just shocking to have these tools — I could fill something in, I could create a hatch effect, I could do these kinds of things. And then learning years later that Bill Atkinson, who recently passed away, had invented a lot of those tools. There was a pinball arcade somewhere in Cupertino where a particular light effect was used, and that's where he got the idea for the marching-ants effect, which is to some extent still used today to indicate that you've selected a region. These are all kinds of things we see being done in the external world, and then start to internalize them. And as you internalize them, they change the way you think — they change the thoughts that you can have.

To your point, I don't have a very — I wish I could talk better about this. I'm a few years out of date, actually — I just haven't really been thinking that much about it. But yeah, it is very difficult to talk about.

[00:59:13] Beatrice: For some reason it is. But I recommend people read more — you have a bunch of posts about it on your blog.

[00:59:20] Michael: Yeah.

[00:59:21] Beatrice: I feel like I'm almost asking you the same question again — because it almost feels like a hyper-entity question again — but if you think about tools for thought, are there any that you think would be really high-leverage for creating a better future, if we were able to have them and integrate them more broadly?

[00:59:39] Michael: Not really. Nothing that comes to mind.

[00:59:41] Beatrice: No.

[00:59:43] Michael: I suppose I am historically pretty interested in the connections between capitalism and the changed nature of society. So the invention of a medium of exchange seems to have obviously had a huge impact on the economy, but it's weirdly also had this impact on the ways in which people behave. I guess the Protestant work ethic is probably the best-known example of this — Max Weber's notion. But it seems to have modulated behavior in lots of other ways. There's this — I think it's urban legend — that no two countries which have a McDonald's have ever gone to war with each other, or something along those lines, which I doubt is true, but it might have been true at one point. It's trying to express some idea about the way in which human behavior and collective behavior isn't just modulated in the obvious economic ways, but also modulated in other ways by our economic system. And in some sense, certainly, I wouldn't say money isn't just a tool for thought, but that's part of what it is.

I think that has become more and more clear through work on cryptocurrencies over the last few years, where a lot of those people are explicitly very interested in the question of how does the medium of exchange we use modulate collective intelligence? How does it enable new types of collective action? So in that sense, I suppose it's not an individual tool for thought, but it certainly is a collective tool for thought. So I'm very interested in that question, but I don't have a good answer. I love things like Protocol Labs, with this series of events on public goods and connections with the crypto world — I think that's incredibly interesting. I also think it's very interesting that Vitalik has this terminology, d/acc, which isn't very explicitly connected to Ethereum yet, as far as I know. But I find it very interesting that he's thinking about ways of economically modulating our focus on defensive versus offensive technologies. That's another potential link, but again, not very concrete.

[01:01:40] Beatrice: That's really interesting. Yeah, and "Funding the Commons" — the events...

[01:01:44] Michael: Yeah, "Funding the Commons," that's right.

[01:01:46] Beatrice: Yeah. I think we'll leave the tools for thought there, but it's a really interesting idea that I think we should definitely look closer at — and maybe the best questions don't have any simple answers as well.

I think I want to try to wrap a little bow around this conversation, because I think the reason I do enjoy reading your posts so much is that I never really know what I'm going to get — and that's quite enjoyable. I think it's because you're quite intellectually independent and you don't mind being a bit contrarian, but not out of a need to be contrarian — more like you don't mind it if it's necessary. So it's principles first, rather than interpreting everything based on an ideology, which I find a lot of other people maybe do. So my question for you is: is this something that you do consciously, and is this something that you work to uphold? And do you think that it's important?

[01:02:49] Michael: I don't think terribly consciously. There are various specifics about the ways in which it's enacted which are fairly conscious. But the overall point of view — I trained as a theoretical physicist, and I think that comes out of just an obsession with understanding underlying principles. And then, actually, while doing that, at some point you realize you need to get quite good at mathematics. And probably the most useful thing I ever did for getting better at mathematics was treating books and papers about mathematics not really as things to read, but rather as sets of problems to solve. So you see a theorem and you don't read the proof — you try to prove the theorem yourself. And that's, I don't know, thousands and thousands of hours of that. And yeah, that was done as an expression of personality, but I think at least as much done out of a sense of: "oh, this is the most useful way to get better at this and to understand it more deeply." And then probably sets up a whole lot of habits of mind that then carry over later.

I will say, I've often noticed — I'll be writing something, writing a paper or essay, and there'll be a section that I absolutely hate. And that's often a sign of a real opportunity. Typically what's going on is I'm regurgitating the standard story and realizing that there's something a little bit wrong here. And you tweak and find what is wrong, and then sometimes it will all unravel. And even when you go back and explain the actual point of view that you arrive at, it might not actually be very different from the conventional story. Often there's a lot of wisdom in the conventional story — somehow you've really thought it through and understood it in a way that you didn't previously. But that's — I don't know — it's a compulsion. I'm not sure it's particularly admirable, or chosen, or anything. It's more inevitable.

[01:04:32] Beatrice: It does seem good, in the sense that there are obviously a lot of interesting intellectual communities — the progress movement recently, or the effective altruism movement, these sorts of things. And it seems valuable for avoiding groupthink, which I think a lot of these types of communities are at risk of. So do you have any recommendations on how to disagree politely, or something like that?

[01:05:12] Michael: How to disagree politely — I wish I were better at it. It's funny, actually, how much of disagreeing politely comes down to individuals. I don't know — I think of somebody like Toby Ord, who's one of the founders of effective altruism, or Patrick Collison, who's one of the founders of progress studies, or Alexander Berger — he's a key EA person — they're so nice and polite themselves. You can't really disagree in a nasty way with them. You would feel ridiculous. It'd be like being mean to — I don't know — the Dalai Lama or something. Maybe that's actually a good way of thinking about it: you want opponents who are themselves exceptionally polite people to disagree with.

I don't always understand it — some people I will find myself provoked by, and it will seem to me that they're lovely people, and yet there's something in me that is responding. And I find it difficult sometimes to disagree in a productive way with those people. I think that's usually more an indication of something unresolved in me than it is something about them.

Yeah, so it's a good question. Something I remember seeing years ago was Camille Paglia being interviewed, and she was incredibly interesting — she had so many fascinating things to say. But at a few points she got onto the subject of academic English departments, and she had a lot to say about this because she'd spent much of her career in them. And I remember thinking after a while: I wish she'd had better enemies — in the sense that I got the feeling a lot of the people she'd found herself opposed to there didn't deserve somebody of her level of imagination. It was a minor point — obviously not the main thing about her life — but I remember thinking: oh, you want to pick good enemies to do interesting work. I don't mean enemies in a particularly serious way there, but having people you disagree with in very productive ways is somehow very helpful.

It's a lovely thing. I think that about many effective altruists in general — they're just a delightful group of people to disagree with. Such an interesting group in that regard.

[01:07:19] Beatrice: Yeah, maybe that's the way we wrap it up: we need better enemies, but also better friends to disagree with. I think...

[01:07:28] Michael: That's really the right way of putting it. And I'm certainly not referring to them as enemies — but that was... Seeing that interview actually had a very deep impact on me, realizing that you do need — it's very healthy somehow to be disagreeing in very productive ways.

[01:07:49] Beatrice: Yeah. I actually — this is a bit self-advertising maybe about Foresight — but I do think Foresight is a really interesting community in that sense, that there will be very many different viewpoints at a Foresight event, as has been my experience. Which definitely, I think, just opens up your range of what's possible, and yeah.

[01:08:11] Michael: It often feels like a sort of crossroads in a really interesting way, where caravans have set out from many lands to come there and meet and exchange views.

[01:08:23] Beatrice: Yeah.

[01:08:24] Michael: It's very interesting in that kind of way. It's almost like a facilitator — almost like a specific Amsterdam coffee house in the 17th century: a place to bring people together, so hopefully they can have productive disagreements and whatnot.

[01:08:40] Beatrice: Maybe that's the note that we end on. Thank you so much, Michael.
