Podcasts

Michael Nielsen on Hyperentities, Tools for Thought, and Wise Optimism

about the episode

How do we imagine, validate, and steer toward better futures?

In this conversation, scientist and writer Michael Nielsen joins Beatrice Erkers to explore the idea of “hyperentities”: imagined future artifacts that reshape our capabilities and the verbs we use to describe them. They discuss how science fiction, public goods mechanisms, and open science feed into real-world innovation, and how imagination and design shape the trajectory of civilization.

Michael reflects on dual-use technologies, from quantum physics to cryptography, and explains why deep truths about the universe often come bundled with both promise and peril. They also dive into "tools for thought," kindness as a moral technology, and why exploration, however illegible, is crucial for progress.



Transcript

Beatrice: So, today's guest is Michael Nielsen. Michael is a scientist and a writer, I think maybe best known for his work on open science, quantum computing, and tools for thought. He's been a leading voice in, I think, reimagining how science is done in this age. And Michael, you have this really broad background and scope of things that you've worked on, obviously. I do love reading your blog; it's a very broad range of writing. And I think in this conversation, what would be really interesting is exploring and connecting many of these various topics that you've written on to this idea of existential hope and just positive futures in general.

So maybe we just dive in. I think starting with one of the central concepts that, when I found it in your work, I thought was a really interesting way of putting it, which is this concept of a "hyperentity." Could you maybe explain how you came up with that concept and dive into what it is? Is it different from something like a meme, for example?

Michael: Sure. Actually, it's funny; it's a little bit embarrassing. I wrote this long piece—40,000 words—about how to be optimistic in the face of artificial superintelligence. And that's a very heartfelt, very serious piece from my point of view; it really mattered to me. But there is this little rant in the middle of it about these hyperentities. And it really was just expressing a sort of irritation. I've been a little bit surprised, shocked even. A lot of people have told me that actually, that's the piece of the essay that they responded to. It was an aside from my point of view.

But it's really riffing on some older notions, which I didn't feel were quite the right term for what I wanted. I just wanted a term for an imagined future object, some sort of hypothetical future object. So, artificial superintelligence or quantum computing, or in the past, you might have had things like universal programmable computers or heavier-than-air flying machines. Those were examples from the past where somebody has imagined a future entity, and that has then served as a design goal.

The aspect that I'm interested in is the extent to which imagination was required to do the design work. A notion like a universal programmable computer seems very obvious today because our culture carries it, but at the time, it was a shock and required enormous imagination and enormous understanding to come up with. And those are the connotations that I was interested in carrying.

A good example, actually, associated with the Foresight Institute, and in particular with one of its founders, Eric Drexler, is his notion of a "molecular assembler." And that's a great example of something that required a lot of depth of understanding and imagination to come up with, but after the fact, seems pretty obvious. It's not just Eric who did that; I mean, people like Heinlein and Feynman are sometimes given partial credit. But together, this group of people had this very deep insight, and then it served as a kind of orienting vision. Sorry, that was a very long answer, but it provides some of the context.

Beatrice: No, it's perfect. I think the only thing that I would have heard someone say similarly is a "future artifact" or something almost. That's something that when we've done workshops on world-building and these sorts of things, that seems like the term that most people use.

Michael: Sure. I guess there are a couple of other terms. Bruce Sterling has this nice term, "spime," which he uses to mean something quite similar. It's got a little bit more of a focus on an object which has a lot of knowledge about itself. Nick Land has this term, "hyperstition," a play on the term "superstition." It's really shorthand for a self-fulfilling prophecy, something which becomes so attractive to existing institutions that it then forces itself into existence.

But those terms are slightly different in the sense of their connotations. Although they're talking about future entities, they're not really talking about the particular actions that you do with them. I'm interested in the way in which these kinds of entities tend to carry new verbs with them. So, the idea of a programmable computer. The idea that you specify a set of instructions and a molecular assembler will be able to build any type of material object within some very broad parameters. You sort of have these new kinds of verbs or "affordances" in design terminology. So, it's very much a design point of view.

Beatrice: What was the first term you said there? Spime?

Michael: Spime. It's from the science fiction author Bruce Sterling. Actually, he wrote it in a long essay about the design of the "internet of things," which was the context. It's quite a nice term, but it doesn't seem to have really caught on. I've heard a few people use it.

Beatrice: Not like hyperentity, apparently. That's the one that caught on. Well, if I remember correctly from your post, you were also suggesting that maybe Silicon Valley tends to undervalue hyperentities. Do you want to comment on that?

Michael: Yeah, sure. I mean, undervalue... not exactly. They rely completely on there being a supply of hyperentities. It's just the way in which the economy of Silicon Valley works. There's no way of building a moat around an idea. Once somebody's had the idea, it's very easy for it to be copied and for other people to act on it. So typically, it's coming from other parts of the world. Academia and art, and the penumbra around them—I sometimes talk about "para-academia"—those tend to be places where these ideas come from.

The examples I gave before—molecular assemblers, artificial superintelligence, and universal computers—they all come out of academia or the little area around it. There are examples like virtual reality, which actually comes much more out of the artistic community. What's going on there is that the economy, so to speak, in those cases is a reputation economy, which is built around citation or attribution of ideas. So somebody like David Deutsch can really build a reputation and a career around an idea like quantum computing. But the economy in Silicon Valley works differently. It's around equity and around ownership of the means of production.

And so you have these two different systems that are coupled in some way. The ideas coming out of academia are often used as almost a feedstock for Silicon Valley, but it is, I think, somewhat separate. The way I tend to think of it is that often in Silicon Valley, ideas and understanding are very instrumentally useful, but they're pursued in service of whatever the product and whatever the market is. Whereas in academia, and also to some extent in a slightly different way in the artistic world, the ideas and the understanding are fundamental. That's the object. That's what you're actually trying to do. And your ability to execute, to build the apparatus or whatever, that's a secondary thing. That's instrumental to the fundamental goal, which is to obtain that understanding, to obtain the deep new ideas. I think of them as two coupled economies, fairly weakly coupled; they both have their own reasons for existing. I don't want to get into any judgment. My personality is such that I'm happier in the artistic-scientific realm, but of course, the world is made up of things which come out of the other realm. So they're both very, very important.

Beatrice: Yeah, we had Ed Finn on, who was the editor of the Hieroglyph book, where they connected scientists with sci-fi authors to try to come up with scientific artifacts, I guess, future artifacts. Do you see sci-fi as a potential tool for creating hyperentities, or do you think that's maybe too cliché?

Michael: I mean, historically, people love to point to the example of Arthur C. Clarke and geosynchronous satellites, which actually wasn't his idea, although he popularized it. And there are a few examples. I constantly find Vernor Vinge, the science fiction author, really remarkably prescient in how he viewed the future. I actually see almost more connection to design than to science fiction. This idea of developing new classes of objects with new types of actions, new types of interfaces—I sort of think of it as more a meld of scientific understanding and that.

The fact is, nature is just so much more imaginative than we are. And so a lot of the most remarkable things that have been discovered—you think about general relativity or superconductivity or ideas like the fact that light is an excitation of the electromagnetic field—these are mind-boggling facts, just utterly shocking, so much more interesting and surprising than anything a human being has ever imagined de novo.

So in that sense, I think you actually need that marriage of very deep insight into how the world works with a design point of view. Science fiction writers to some extent have that, as do designers. But the science fiction writers themselves, I think, often don't have quite enough depth of insight into the world. They're stuck recycling ideas that they've seen, often very effectively, and they can see all sorts of interesting consequences, but it's not quite the same as invention.

Beatrice: Yeah. Do you think that there should be more conscious effort, for example, from Silicon Valley in trying to design hyperentities or come up with new ones to generate excitement or a direction?

Michael: Not really. I mean, they're doing what they do. A good example is Palmer Luckey, the founder of Oculus. He was working in a fairly basic science lab, I think at USC, that was just doing experimentation with simple prototype VR systems. So that's, in my opinion, an example of the supply chain working reasonably well. You've got this fundamental work being done, which in its own way doesn't need any justification, but in practice, it does feed in. There's always this interesting friction between the two worlds because they have two such different fundamental objectives. But I don't think Silicon Valley particularly needs to change what it's doing there.

There is, I suppose, this interesting question about public goods production, which is that the nature of ideas and understanding is that they most naturally want to be public goods. That's in many ways best for our society. It frustrates me a little bit to imagine what's in the internal mailing lists of companies like Google and Facebook and Microsoft, and for that matter now companies like OpenAI and Anthropic. There must be just an astounding amount of understanding hidden there that would be tremendously valuable if it was made public for the world but is going to be forever lost. I'll bet that most of what we know about distributed systems is basically hidden inside old Google mailing lists, but it's not in the company's interest necessarily to be releasing that. That's one point of frustration, but I think it's intrinsic to the situation. The companies do have a lot of vested interest in keeping that stuff private, unfortunately. I don't know how to solve that design problem.

Beatrice: Yeah, I think on the Palmer Luckey point, I heard him say that he got all his best ideas for weapons—because he works on that now—from sci-fi, which is interesting and maybe not the way that I was hoping people would get excited from sci-fi. But with the hyperentities term, there is something there in terms of, if we actually want to think about how we can steer towards not just a future but a better future, we should maybe put more effort into how we design the hyperentities that actually end up taking hold. Obviously, it's a very complex and maybe random thing what becomes the next hyperentity.

Michael: Yeah, it is a bit strange. And this is actually connected to Nick Land's point about hyperstition—the extent to which different things become popular or become seen as targets. In some sense, the difference between now and 2020 in terms of the possibility of AGI is not that large, but the difference in belief is absolutely enormous.

I think about a parallel world—cryonics, say. You imagine that the same degree of belief and capital had been infused there. What would that field look like now? It would look probably very, very different. You still have people working on it, but just not at the same scale. You don't have thousands of people coming into Hayes Valley or wherever trying to work on this. So that degree of belief and self-fulfilling prophecy is actually pretty important. And it seems a little bit random. It's certainly very influenced by single actors. If tomorrow, some very important investor who participated in prior rounds comes out and announces that they think OpenAI should have a down round, that would have a very large impact. And it's potentially just down to decisions by a few people. Obviously, I don't want this to happen, but if Taiwan was invaded—and again, that decision can just be made by a small handful of people—that would also have an enormous impact on this degree of belief. It's funny how that works.

Beatrice: Yeah, it's hard to steer, I guess, or hard to design ahead of time.

Michael: Well, it's possible to steer, but it's so dependent on contingent actions. It's some interesting combination of power and truth. Power does matter a lot in the short term. Truth matters, I think, more over the long term. If AGI is not going to happen through anything remotely related to LLMs, then you can pour as many trillions of dollars into it as you want, and it doesn't matter. That's just a fact about the world. I'm not saying it's true, but I'm saying that that's a potential way the world is. You can make two plus two equal five for a while with enough money, so to speak, but over the long term, reality wins.

Beatrice: That's true, reality wins. My last question on the hyperentities point would be: because obviously AGI is the hyperentity of the moment, at least in our sphere maybe, are there any other ones that you're personally excited about? If you think about what would bring potentially the most value to the world, what would those be?

Michael: It's interesting you say it's "of the moment." In some sense, renewable energy is actually still more important. I think fossil fuels are roughly a $3 trillion industry, so much, much larger than what is conventionally called "tech." The ability to replace that is in some sense more important. And the ideas of cheap photovoltaic power and cheap batteries remain these orienting visions. They have been for decades, although I think probably a lot of people didn't really—some people still don't—believe in those.

To answer your question about other valuable hyperentities, I am very interested in and excited by mechanism design for solving public goods problems and collective action problems. So, ideas like assurance contracts, Alex Tabarrok's idea of dominant assurance contracts, the Vickrey-Clarke-Groves mechanism, and the quadratic funding work that Glen Weyl and Zoë Hitzig and Vitalik Buterin have built on those ideas. These sorts of ideas are very interesting. If you can solve the public goods problem or collective action problems, that's just totally transformative for civilization. It's probably related to, though not the same as, these centuries-old dreams of global governance working, and we've never really figured out how to do that. I suppose things like the nuclear non-proliferation treaty and the Vienna Convention and the Montreal Protocol are very important progress that seems somehow related, but gosh, our governance mechanisms seem primitive.
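
A minimal sketch of the quadratic funding rule mentioned here, assuming the matching formula from the Buterin, Hitzig, and Weyl "Liberal Radicalism" paper: a project's total allocation is the square of the sum of the square roots of its individual contributions, with a matching pool covering the gap above the raw donations. The function name below is illustrative, not from any particular library.

```python
import math

def quadratic_match(contributions):
    """Illustrative quadratic funding rule: the project's allocation
    is (sum of sqrt(c_i))**2, and the subsidy is whatever the matching
    pool must add on top of the raw donations."""
    raw_total = sum(contributions)
    allocation = sum(math.sqrt(c) for c in contributions) ** 2
    return allocation, allocation - raw_total

# Broad support is what draws the match: 100 donors giving $1 each
# attract a large subsidy, while one donor giving $100 attracts none.
print(quadratic_match([1.0] * 100))  # (10000.0, 9900.0)
print(quadratic_match([100.0]))      # (100.0, 0.0)
```

The mechanism rewards breadth of support rather than raw dollar totals, which is what makes it interesting as a collective action tool.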

Beatrice: I think it's a really interesting point because with the Existential Hope program, I've done a lot of workshops on what future people want to see and then also what they think is the biggest blocker. And a really common theme is just "human nature" or similar things.

Michael: Last year I read Tom Holland's book, Dominion, which, to some extent, is a history of kindness. It's a history of charity. He's particularly interested in what influence the Christian tradition had on our modern notions of what it means to be a good person. And he claims, I think fairly plausibly, that it really contributed a lot to people valuing kindness and charity. Obviously, people selfishly value kindness—we like it when other people are kind to us—but they don't necessarily hold it up as one of the chief virtues. He claims that ideas like "turn the other cheek" and "love thy neighbor as thyself," which come out of the Christian gospels, were amplified by Christianity not just in the Christian world but also in the secular world. He traces how deeply those ideas influenced the Enlightenment and other cultures.

Describing it as a "social technology" is a bit silly, but it is very interesting the extent to which those notions are constructed out of stories, out of myth, and become transformative for civilization. I don't know that you can get things like human rights—I'm reading about the history of human rights at the moment—or the suffragette movement and the Civil Rights Act without those things.

Beatrice: Yeah, you can almost think about them as social technologies. It's especially interesting now when we think about what we want to instill in our AIs. We want to make sure we take off with these values, not other values.

A lot of your writing is about your excitement for technology, which we at Foresight share. If you read Drexler's Engines of Creation back in the 80s, it's about all these exciting things we could do. But then there's also the point about all the very scary things that could also be achieved. You wrote a post about how to be a "wise optimist" about technology. Could you maybe help us unpack what that means to you and why it's important?

Michael: Sure. I was just amused by the growing use of the term "optimism" in discussions about technology and what seemed like a very bad misuse. If you go to see your doctor and you're diagnosed with cancer, the optimistic path isn't to ignore it and pretend it's not happening and just hope that it will spontaneously go into remission. The optimistic path is to really take it on board and not get depressed—that would be the path of pessimism—but to take it on board and ask, "What can I do about this situation?"

A lot of what passes for optimism in tech seems to me like a foolish optimism, where it's like, "No, no, we're not going to think at all about the problems that these AGIs could cause." In fact, it's actually quite pessimistic because it relies on you believing that they're not going to be particularly capable. If you look at the people who believe they're going to be the most capable—they're the most optimistic about capabilities—they are the people who are actually the most worried. So the actual optimists about capabilities are very worried. They're the people who are facing up to the fact that there is this very difficult diagnosis. And I think the wisely optimistic response is to ask, "What can we do about it?" That doesn't mean you can necessarily find an answer; it may be terminal. But at least you can engage seriously with the actual state of affairs and do your darnedest to find positive things to do.

Beatrice: That's actually really true, that the ones that are the most optimistic about these technologies are also the most scared. I feel like that's definitely an observation I've made with people like Nick Bostrom or Anders Sandberg. They were transhumanists who got really, really excited about the future and then were like, "Wait, what if all these things get in the way of making that future happen?" And so I feel like that's how we ended up here.

Michael: The first jet airliners, a lot of them crashed. That problem needed to be solved. In fact, de Havilland went out of business because of it with their Comet. And there are so many examples like that. The early refrigerators leaked ammonia gas and killed people. They needed to replace it. They replaced it, funnily enough, with chlorofluorocarbons, which was a big step up—it didn't immediately kill people—but of course, it did have this other problem. That pattern of being honest about the problems and figuring out how to fix them is foundational for a good... well, foundational for existential hope.

Beatrice: Yeah. In the same post, you also write about the fact that advances in science bring us closer to the truth, but truth is dual-use. Is that still your take? And does that mean we want to get as close to the truth as possible? Because when I ask people about their most existential hope future, "learning as much about the universe as possible" is one of the most common things they say. What's your take on that as our ultimate goal?

Michael: I don't know, that seems a little too Pollyanna-ish to me. A good example: I spent much of my career as a quantum physicist, and there's this astounding set of ideas developed from about 1900 to 1926 which explains the most fundamental things about how the universe works. It's very important for later things, if we want to understand proteins or DNA or modern semiconductors, but it's also part of what led to the understanding of nuclear physics that resulted in nuclear weaponry. I don't think you get that a la carte, unfortunately. It's all or nothing. Once you've got quantum mechanics, you're going to get all of those things.

This seems to be just an empirical observation, I can't really prove it, but so often, any deep understanding of reality ends up having both many positive uses and many negative uses. I don't really understand quite why there is this intrinsically dual-use nature of deep understanding. A really nice example is this famous declaration by the mathematician G.H. Hardy that he couldn't think of anything more useless and more beautiful than number theory. And then, not that long after his death, GCHQ and the NSA discovered that number theory could be used to develop cryptographic systems with all these interesting military applications. He was kind of wrong. Even the most apparently useless set of ideas that seems so disconnected from human reality turns out to be very connected. Much of the reason for the NSA's funding of quantum computing is because they want to be able to break these codes. I don't know what Hardy would have made of it, but again, these very deep facts somehow show up in very mundane and sometimes very dangerous ways.

Beatrice: It's definitely dangerous.

Michael: It's pretty strange, actually. You see this so much. Every once in a while, I'll see the Riemann zeta function in an unusual place. These things that seem so obscure and so disconnected turn out eventually to be very important in the everyday world.

Beatrice: Well, I guess there is something very easy to get excited about in that. It is nice when you're able to make connections and expand, you know, the Star Trek vision. That seems like the most exciting vision that anyone has been able to come up with so far that most people I've talked to can get behind. And that's so much about just continuing to discover the universe.

Michael: Yeah, I don't know. I have not watched Star Trek. I've watched probably a tiny handful of episodes, only from the original series. Something that's funny about a lot of these visions is they're often a bit strange from an emotional or an artistic point of view.

I made this point before about the spread of kindness as something to be valued, which eventually gets built into governance. I think you often don't see so much of that kind of thinking going into visions of the future. Like, what other fundamental emotional stances will people create and start to spread? A notion like "love your neighbor as yourself" is a shockingly imaginative idea to have and then to evangelize. And I'm just wondering what other ideas like that, which have to do with this basic stance towards the other, might exist.

Modern ideas about veganism and who should be treated as a moral patient are obviously very interesting. I think notions around identity are likely to fracture an awful lot in coming years. They're already fracturing in interesting ways in human beings, but they're going to fracture a lot more with the ability to merge, refactor, change, and distill artificial minds. The boundary around a human being is very essential, but those boundaries are just going to be utterly changed. It's going to be very hard to say what it means for there to be a boundary around them. I never quite know when using Claude or whatever, am I using the same model as I was using last week? To what extent are they tweaking it all the time? And of course, there is that funny feeling I think many of us get where you keep starting new instances, and it's like you're starting a relationship again and again and again with this thing that has some human characteristics. It's a very strange experience. It's a very human experience in some ways, but also a very inhuman experience. This reset on identity over and over again—it's the Groundhog Day of personalities.

Beatrice: Yeah, I think on the values point, that's definitely one of the most underexplored areas, especially in the current context. I think there are potentially exciting applications for things like moral circle expansion through new ways of communicating, like neurotechnologies. To be able to communicate with someone on a whole different level of granularity or nuance could be a new moral revolution. That would be the best possible outcome.

Michael: It certainly seems like it'll be an expansion and a change in really interesting and challenging ways.

Beatrice: Yeah. Unless you have anything else to say on that, I'll jump to another point.

Michael: Can I just mention one thing? When this kind of point is made, people will very often name David Pearce's "Hedonistic Imperative." And I find it interesting that they always name the same example. When you notice that happening in any domain, it's often a sign that not much is really going on over here. That idea is very interesting, but it indicates to me that the area is a bit underexplored.

Beatrice: Yeah, it's interesting if it's underexplored or if there's nothing else there. But I do feel like there is something there.

Michael: There's no way there's nothing else there. You think about how capitalism, for example, has modulated our values in so many different ways. Any changes to that are going to modulate our values in very interesting ways.

There's too much moral change over the last... I read a little New York Times piece from about 1905 which recorded that a pygmy had been brought from Africa and was now being housed in the Bronx Zoo. The article wasn't exactly approving, but it wasn't exactly shockingly disapproving either. Can you imagine that headline now? It's impossible to imagine. It just seems ludicrous, and yet this was a hundred years ago. It's funny how slow and stop-start moral progress seems, and yet as soon as you go over a hundred years, there are definitely really big changes. And people like Martin Luther King or Gandhi or Mary Wollstonecraft make such enormous differences in the way we think.

Beatrice: Yeah, I was thinking about this because I was reading a book about women working in a hotel a hundred years ago, and I was like, "We get to work!" And that was the exciting thing then.

When people mention only one example, it is interesting with futurism in general. I feel like this has been the case for a while. When I've talked to people like Christine Peterson, our co-founder, who's been around since the 80s with Drexler, it's the same things being discussed. Obviously, things feel more real now. For example, AI feels like it's actually coming. She said that feels different compared to when they were talking about nanotech in the 80s; we can see the progress much more clearly now. But I think that's why people connect to the hyperentities idea so much—because the vision for the future is a little bit outdated. Some people can get behind it, and the Silicon Valley mafia doesn't really mind that type of 80s vision of the future, but it doesn't feel like it speaks very broadly to people on a global scale. Even if you prompt your LLM to create a vision of 2035, it's going to be silver clothing and white houses, because that's how people imagined the future in the 60s.

Michael: Yeah. I think this is something where a lot of the best science fiction writers are doing really interesting things. I love C.J. Cherryh, for example. She has these just beautiful ways of thinking about identity. Charlie Stross has a really nice, short review of Cherryh. He points out that most people think of an identity crisis as being, "Am I in the right job?" Cherryh's characters' identity crises are about, "Am I in the right species? Am I sentient?" And it's a good description. She's very good at thinking in that way and just expanding your own consciousness of what's possible.

It's also something I love about some of Ted Chiang's writing, which tends to be more near-future. But again, he's a person that has this very expansive notion of what being a human being is about. It tends to be a bit more interesting than just being about technology. In a funny way, I suppose I think of technology as the interface between humanity and the universe. And so I'm putting a lot of focus on humanity there, not so much on the universe. Maybe as a writer, you need to focus a little bit on one of those three things.

Beatrice: Yeah, I wasn't familiar with C.J. Cherryh. I feel like I have to check her out.

Michael: C.J. Cherryh, yeah. She's won all the awards, and her book Cyteen is probably her best-known book. It's one of my favorite novels. It's very slow to get started—I almost gave up several times in the first 100 pages—but I've probably reread it half a dozen times now. It's a very interesting exploration of what it means to clone an identity, and it explores that through multiple instances in very different and remarkably insightful ways. Her background is really interesting. She was, I believe, trained as an archaeologist, and it really gives her—she's not a technologist—this other very deep tradition of how people enter the world. There's something very interesting going on that I don't know enough about archaeology to understand, but you can feel it. It's like, this person has a depth to them that is very, very interesting.

Beatrice: We'll link to it in this episode so people can explore it, myself included.

So, it relates to what we've already touched on, but I know that you wrote that breakthroughs like language don't necessarily come from goal-driven processes. If we're trying to invest very wisely for the most optimistic future, do you think that we should try and engineer for a specific outcome? Or are there underlying systems we should focus on, like tools or institutions that will open up the possibility for positive surprises? Is it these very concrete ideas that we should aim for, or is it more like creating a fertile ground of discovery that we should invest in?

Michael: I mean, I think you need to do both those things. My personality is to prefer the more exploratory approach. One of my favorite facts is that Charles Darwin didn't go on the Beagle to do biology; he thought he was going to be doing geology with a little bit of biology on the side. Talk about undirected exploration paying off. People like Newton didn't have a goal of inventing modern science; he couldn't have. He discovered the goal. Einstein couldn't have written a grant application to change our notions of space, time, energy, and mass. He was just exploring, and those are things he discovered after the fact. So that's my personal prejudice.

But by the same token, you pointed at things like the Apollo program as really interesting unifying visions. In an interesting way, your question is about the value of coordination versus exploration. If you're going to coordinate, goals are very useful. You build LIGO this way, you do the Apollo program this way, you build the LHC this way. And exploration is more something which is done by individuals or small groups.

Part of the reason I feel passionate about defending exploration is my sense that the current time in history is a little bit anomalous, in that the tendency of bureaucrats and leaders is to prefer a goal-oriented approach. It's a very natural thing to want to do. You can make the big announcement, "We're going to cure cancer," and direct a lot of money that way. Whereas this undirected approach where you don't actually know what is going to be found is harder to defend. It's so much more hunch-driven and, in particular, often in very illegible ways.

Vannevar Bush wrote a famous report which led to the foundation of the National Science Foundation in the United States. I believe it's the first place that the term "basic science" is ever used. That was a coinage as part of a political argument in favor of this kind of basic, undirected, exploratory research. And he won that argument really comprehensively, which is I think why there is as much support for basic exploration as there is now. But the natural pendulum is in the other direction. That's kind of the high-modernist approach: to map out where you're going to go. My friends, Adam Marblestone and Anastasia Gamick, are running Convergent Research, which builds these focused research organizations. And they're lovely, and I am very glad that they're doing that, but I don't want a future in which everything's an FRO either. I think that would be bad.

Beatrice: Yeah, I think as a funder, it's definitely very scary to fund exploration because you may end up with nothing.

Michael: Well, in fact, you will end up with nothing almost all the time. Although, how much future work is justified by Max Planck inventing quantum mechanics, or Albert Einstein inventing general relativity, or Darwin inventing evolution, or Marie Curie and radioactivity? You can only set goals if you know what's out there. And yet by definition, we actually haven't seen most of the universe. Most of the parameter regimes are still completely unexplored. And so you can't really plan well in those places, and yet historically, they've been the most exciting places to go.

Beatrice: So, to sum up that point, it feels like it's pretty clear that both are needed. We need the exploration that probably takes some people with the guts to fund it, and we need coordination. But I know that in one of your posts, you actually said that maybe we should consider having a "serious discipline of imagining the future." I think it would be fun to riff on that. If you were to decide that we're going to try to do that in practice, and let's say you got a hundred million dollars, what would you do?

Michael: I think I meant something very specific about that. People have been imagining the future seriously for well over 100 years. I was referring very specifically to this task of conceiving of new hyperentities in particular, taking this design point of view which says that the most interesting thing you can do is essentially find new verbs. It wasn't about predicting the future; it was about imagining the future. So much of the work that's been done on foresight studies or futurism tends to take the extant objects as an input to the process; they're not really an object of the process. I'm talking about making them more of an object.

Beatrice: Yeah, no, I think it's definitely clear that it's the imagination that matters, which is really interesting because that's often what gets taken less seriously, or it's harder to see the short-term value in it.

Michael: Well, the question of how you validate the imagination is a really interesting one. One of my favorite examples is Alexei Kitaev's incredible idea of a topological quantum computer. It's founded in a state of matter that naturally wants to store quantum states, and if you twist it around in the right way, that will result in quantum gates being applied to it. This should sound ludicrous and almost incomprehensible, but it's such a feat of imagination to conceive that this would be possible and then technically to come up with models in which it's very plausible. It's those models, and then ultimately building the systems, which validates the imagination there.

Something like the assemblers. In Engines of Creation, and then the PhD thesis Nanosystems, Drexler did all this interesting validation work showing that it was at least somewhat plausible that you could build these systems. So there is some notion of "validated imagination," if that makes sense. It's physically possible, but also you want some sense that there's something essentially new here. If you told a quantum physicist 50 years ago that it was possible to store quantum states in these macroscopic phases of matter, it would have just seemed impossible. This is not the way quantum physics works, and yet it turns out that, with some caveats, that's the right way of thinking. There's a shocking scientific fact there.

Beatrice: So, we could take five minutes to talk about tools for thought, but maybe that's too short...

Michael: Beatrice, I'm happy to talk. If you want to extend it by 15 minutes, that's fine by me. Up to you.

Beatrice: Okay, well let's try to talk about the tools for thought point just before we wrap up. You've obviously written about this. I often find it hard to talk about tools for thought because it's like, yes, but we don't have the tools for talking about it. If some listeners are unfamiliar with this concept, do you want to just explain what we mean when we say "tools for thought"?

Michael: Sure. The phrase is due to Ken Iverson from the 1960s; he's the inventor of the APL programming language. He was very interested in the idea that our notation is a tool for thought. This idea has a very long history, that the symbols we use influence the way in which we think. It's most often associated with Sapir and Whorf, sometimes regarded as a bit disreputable. But if you take a long enough historical view, it's clearly true. Language was an enormous bootstrapping event for humankind. And then apparently related things, like mathematics, have also expanded our world really dramatically.

The ability to build models on paper is such a shocking thing. You think about the first nuclear weapons. This began as graphite squiggles on tree pulp. It's such a shock. Szilárd, I think it was 1933, realizing that nuclear weapons would work. It's not based on empirics; it's fundamentally based on models of the world which we write down with pencil and paper. That's such a wild fact about the world.

Much of human history has been about gradually upgrading and improving those tools. It's very tempting to think that modern alphabets and written language are the be-all and end-all, but almost certainly that's not going to be the case. You think about neural interfaces, and it's not at all apparent what the right way of being linked mind-to-mind is, but probably much more interesting ways are possible. A real epiphany for me as a kid was the very first time I ever used a paint program on a computer. It was MacPaint on a Macintosh, and it was just shocking to have these tools. I could fill something in, create a hatch effect. And then learning years later that Bill Atkinson had invented a lot of those tools. There was a pinball arcade where a particular light effect was used, and that's where he got the idea for the "marching ants" effect that is still used today to indicate a selected region. We see these things being done in the external world and then start to internalize them, and as you internalize them, they change the thoughts that you can have. To your point, I wish I could talk better about this. It is very difficult to talk about.

Beatrice: Yeah. For some reason it is, but I recommend people read more because you have a bunch of posts about it on your blog. If you think about tools for thought, are there any that you think would be really high-leverage for creating a better future if we were able to have them?

Michael: Not really, nothing that comes to mind. I suppose I am pretty interested in the connections between capitalism and the changed nature of society. The invention of a medium of exchange obviously had a huge impact on the economy, but it's weirdly also had an impact on the ways in which people behave. The Protestant work ethic is probably the best-known example. It seems to have modulated behavior in lots of other ways. There's this, I think it's an urban legend, that no two countries which have a McDonald's have ever gone to war with each other. I doubt it's true, but it might have been true at one point. It expresses an idea about the way in which our collective behavior is modulated by our economic system.

In some sense, money isn't just a tool for thought, but that's part of what it is. That has become more and more clear through work on cryptocurrencies, where a lot of those people are explicitly interested in the question of how the medium of exchange we use modulates collective intelligence and enables new types of collective action. In that sense, it's not an individual tool for thought so much as a collective tool for thought. I'm very interested in that question, but I don't have a good answer. I love that Protocol Labs has this series of events on public goods and connections with the crypto world. I think that's incredibly interesting. I also think it's very interesting that Vitalik Buterin has this terminology, "d/acc," that he's thinking about ways of economically modulating our focus on defensive versus offensive technologies. That's another potential link, but again, not very concrete.

Beatrice: That's really interesting. It's the Funding the Commons events. We'll leave the tools for thought there, but it's a really interesting idea. Maybe the best questions don't have any simple answers.

I want to try to wrap a little bow around this conversation. The reason I enjoy reading your posts so much is that I never really know what I'm going to get. It feels like you're quite intellectually independent and you don't mind being a bit contrarian, but not out of the need to be contrarian, more like you don't mind it if it's necessary. It's principles-first rather than coming from an ideology, which I find a lot of other people do. My question for you is, is this something that you do consciously, and is this something that you work to uphold?

Michael: I don't think terribly consciously. I'm trained as a theoretical physicist, and I think that comes out of just an obsession with understanding underlying principles. To get good at that, you need to get quite good at mathematics. Probably the most useful thing I ever did for that was treating books and papers about mathematics not as things to read, but as sets of problems to solve. You see a theorem and you don't read the proof; you try and prove the theorem. I did thousands of hours of that, and it sets up a whole lot of habits of mind that carry over later.

I will say, I've often noticed when writing a paper or essay, there'll be a section that I absolutely hate, and that's a sign of a real opportunity. Typically, what's going on is I'm regurgitating the standard story and realizing that there's something a little bit wrong. You tweak and tweak and find what is wrong, and then sometimes it will all unravel. Even when you explain the actual point of view that you arrive at, it might not be very different from the conventional story—often there's a lot of wisdom in the conventional story—but somehow you've really thought it through and understood it in a way that you didn't previously. But that's a compulsion. I'm not sure it's particularly admirable or chosen or anything. It's more inevitable.

Beatrice: It does seem good in order to avoid groupthink, which I think a lot of interesting intellectual communities, like the Progress movement or the Effective Altruism movement, are at risk of. Do you have any recommendations on how to disagree politely?

Michael: I wish I was better at it. It's funny how much of disagreeing politely is down to individuals. Somebody like Toby Ord, one of the founders of Effective Altruism, or Patrick Collison, one of the founders of Progress Studies, or Alexander Berger, a key EA person—they're so nice and polite themselves, you can't really disagree in a nasty way with them. You would feel ridiculous. It would be like being mean to the Dalai Lama or something. It's good to have opponents who are themselves exceptionally polite people to disagree with.

I remember seeing an interview years ago with Camille Paglia. She was incredibly interesting, but at a few points, she got on the subject of academic English departments. I remember thinking after a while, "I wish she'd had better enemies." I got the sense that a lot of the people she'd found herself opposed to didn't deserve someone of her level of imagination. You want to pick good enemies to do interesting work. I don't mean enemies in a particularly serious way, but people who you disagree with in very productive ways are somehow very helpful. That's a lovely thing about many effective altruists in general. They're just a delightful group of people to disagree with.

Beatrice: Yeah, well maybe that's the way we put the bow around it. We need better enemies, but also maybe better friends to disagree with.

Michael: I mean, that's really the right way of putting it. I'm certainly not referring to them as enemies. Seeing that Paglia interview really had a deep impact on me, realizing that it's very healthy to be disagreeing in very productive ways.

Beatrice: Yeah, this is a bit of self-advertising for Foresight, but I do think Foresight is a really interesting community in that sense. There will be very, very many different viewpoints at a Foresight event, which I think just opens up your range of what's possible.

Michael: Foresight often feels like a crossroads in a really interesting way, where caravans have set out from many lands to come there and are meeting and exchanging views. It's very interesting in that way, almost as a facilitator. It's like an Amsterdam coffee house in the 17th century, a place to bring people together so they can have productive disagreements.

Beatrice: Exactly, it's a safe space for all sorts of opinions and ideas. Well, maybe that's the note that we end on. Thank you so much, Michael.

Michael: Thanks, Beatrice.



RECOMMENDED READING

Concepts

  • Hyperentity – an imagined future object that serves as a design goal and carries new verbs and affordances with it
  • Tools for Thought – cognitive systems (e.g. language, notation, software) that extend thinking
  • Dual-use Technology – tech that can be used for both civilian and military (or positive and harmful) ends
  • Quadratic Funding – matching-based funding model for public goods
  • Dominant Assurance Contracts – incentive mechanism to fund public goods
  • Open Science – movement to make scientific research more accessible and collaborative
  • Validated Imagination – the concept of grounding future-oriented visions in plausible mechanisms
  • Para-academia – knowledge generation outside formal academia

Projects & Events