Podcasts

David Deutsch | On Beauty, Knowledge, and Progress

About the episode


In this episode of Foresight's Existential Hope Podcast, physicist and writer David Deutsch discusses the intersection of beauty, knowledge, and progress. The conversation explores his pioneering work on quantum algorithms and constructor theory, along with his insights on critical thinking, AI alignment, education, and human progress. It is an enlightening, thought-provoking conversation for anyone interested in cutting-edge ideas and research.


Xhope scenario

A Universal Constructor
David Deutsch
...

About the Scientist

David Deutsch, a physicist and writer, has a broad interest in fundamental concepts. He is credited with formulating the first quantum algorithm and, together with Richard Jozsa, the first quantum algorithm capable of solving certain problems exponentially faster than any classical computer.

Deutsch has also proposed constructor theory, which suggests that all laws of nature can be expressed in terms of which physical transformations, or tasks, are possible, which are impossible, and why. This theory extends beyond computation and encompasses everything in existence.

Additionally, Deutsch supports the multiverse interpretation of quantum mechanics. This interpretation proposes that every physically possible event exists in an infinite number of coexisting universes. This perspective can have significant implications for how we view the world around us.


...

About the art piece

This artwork portrays an envisioned universal constructor. It was created by Minh Nguyen (@menhguin), using Midjourney, Photoshop and random ADHD-induced art hobbies. Minh is the cofounder of FridaysForFuture Singapore, worked on a peer support subreddit used by 80% of teenagers in Singapore, and has done policy advocacy since high school. After interning at Nonlinear, he's exploring AI Safety (AIS) advocacy, launching an AIS online university and startup ideas that aid AIS research. Minh is an optimist because "if not, work would be way too depressing".

Transcript

Allison Duettmann: Hi everyone! Welcome to Foresight's Existential Hope Podcast. Today we have a very, very special guest who is incredibly dear to the Foresight community, and it is none other than David Deutsch. I think we have been trying to get you onto this podcast for so long. There are a few people that I really associate with existential hope, one of them being Anders Sandberg, who we had on previously, but I think you are almost the North Star of existential hope. You have written a few really fantastic books, including The Fabric of Reality, which is now a bit older but has aged incredibly well. It is on the multiplicity of universes and how that theory, combined with evolution, computation, knowledge, and quantum physics, can really explain a new worldview.


You then published The Beginning of Infinity, which really was a big deal to people in this community. You provide the antithesis to the prevailing doom meme. You are essentially saying, "No, look, progress does not have to come to an end. In fact, we are just at the beginning, and there are really a few concrete ways that we can push progress forward." The book also offers a few more abstract and really good memetic pieces on how we can think about progress, so it was a really great book. Also, chapter nine, Optimism, in The Beginning of Infinity has really just stuck with me. I think if anyone reads anything like that, it really gets the concept of existential hope across.


Obviously, you have not stopped there. Another talk very dear to my heart that you have given is "Why Are Flowers Beautiful?" It is on YouTube, and it is a real treat. Finally, you are also the creator of one of my favorite child-rearing philosophies. I do not have kids yet, but when I do, they will be raised under Taking Children Seriously, the wonderful child-rearing philosophy and mindset you have put forward. We also have Chiara Marletto, a Foresight Senior Fellow, who wrote a wonderful book on constructor theory, which you are both advancing. We are very excited to have you on, so thank you for coming online. I know I said a lot about your contributions viewed through a Foresight lens. However, please feel free to summarize your perspective on how you got to where you are right now and your life path, to help people get a better understanding.


David Deutsch:

I have never aimed for any kind of global effect that way. Some of the things that I have been interested in have been obviously related. Some of them have turned out to be related to each other, and some have not. I do not think one should direct one’s research, or one’s life for that matter, towards a distant all-encompassing goal because that means if you are wrong, you will not find out until you are dead. All problems are parochial. If they have universal consequences, that is a bonus. We can be on the lookout for universal consequences, but we are on the lookout for all interesting consequences. The main thing is to solve problems as they come up. 


I will give you an example of that. I was interested in quantum computers. I was interested in the theory of computation more generally, and how that relates to thinking. Much later, decades later, came the idea of an AI apocalypse. It turns out that these other ideas that I had, stemming from a completely different context, actually make the AI apocalypse look absurd. For a start, if one regards an AGI as something with human-like intelligence but running as a program on a computer, then it should abide by the same laws that humans do; it does not make sense to apply different laws of society to it, and it especially does not make sense to enslave it. Namely, enforcing AGI alignment by force, or building it into the hardware (not that that would be possible), is attempted enslavement. So that will not turn out well. Also, I would not have guessed when investigating the relevant ideas initially that they would have any such consequences.


Allison Duettmann: I saw you tweeting about this a little while ago. Do you find your ideas have gained any foothold in the AI community? Would you do anything concrete differently based on these specific observations? Is there something you would particularly point people towards?


David Deutsch: Well, there are various things involved here. I think AI, and recently, for example, GPT and ChatGPT, are wonderful things and can be very useful, which has nothing to do with AGI. In fact, I have written that it is more or less the opposite of AGI, because it involves honing the program to conform more and more precisely, and in a shorter time, to the given criteria. Whereas the difficulty with AGI is to write a program such that there is no possible idea of which one can say it will never enter that state or have that idea. Then people say, "How do you know it will not get the idea to murder us?" Well, that's the thing. That is a problem that has existed since the beginning of humankind, and it was solved with liberalism and the Enlightenment. We now know how to do it. We know how to bring people up in a society that makes it extremely unlikely that they will become enemies of civilization.


While we have not gotten it perfect yet, we have got it working amazingly well from the perspective of history. This is seen in the fact that we have so few wars and so little violence, as Steven Pinker likes to point out. It is unprecedented. It is not inevitable. It is not that it had to happen, and it is not that it has to continue. It is just that we have the knowledge, both theoretical and institutional, to keep it going as it has been for hundreds of years, shall we say. If we continue improving it piecemeal and so on, as Karl Popper would have us do, then there is no known reason why it should stop. But it is not inevitable, and it will all depend on what we choose to do.


Allison Duettmann: So if you were speaking to the more concrete AI alignment communities, would you advocate a Taking Children Seriously view for AI, to actually bring them up in a certain way?


David Deutsch: Yes. In general, the history of educational theory since the Enlightenment has been one of increasing freedom for children and increasing integration of the values of society in general with those of educational practices and institutions. So those have come together, and educational institutions are kind of the last institutions of Western society to take on board liberalism and the Enlightenment. Things are taken for granted in schools and universities which, if translated to society at large, would seem absurd, such as valuing obedience and enforcing ritual behaviors. This is better today than it has ever been. It is still improving, and I think if AGI were invented tomorrow, it would indeed be the right thing to educate the newly programmed AGI as closely as possible to the way our society educates children. Of course, I think I know of improvements upon that, but it would be wrong to enforce my narrow view on everybody. But for everybody to conform to the standards of society at large is not impossible, and to do it for AGI is not impossible either.


Allison Duettmann: So you would almost be arguing for more freedom in the way that we educate AI compared to perhaps what the general canon in the AI community is?


David Deutsch: Well, I do not advocate this for AIs. For AIs, I am happy for them to be enslaved and be forced to do whatever we want them to do as accurately as possible. In fact, there is a whole field to make sure we do this so cars do not run over people and that sort of thing. That is fine, and the more accurately that is done, the better, but that is not how you get people to be members of a free society. For people, you have to do the opposite in some sense. We have learned slowly and painfully over the centuries to do some counterintuitive things and to entrench those as fundamental principles of the legal system and financial system and everything. 


For instance, we have policing by consent, but 500 years ago, nobody would have understood what that meant. Or government by the people; nobody would have understood that either. If you had said it to them, they would have imagined some monstrous system which couldn't possibly have worked. However, society evolved through conjecture, criticism, and cultural evolution to make these things work and for them to become second nature. To throw them away in regard to AGI is terribly dangerous. It is the very danger the AGI alarmists are afraid of, yet they want to do the opposite of what is necessary.


Allison Duettmann: Yeah. We wrote a little bit about extending frameworks of voluntary cooperation towards artificial entities. I think it would be interesting to actually see what those could look like in practice. Basically, taking many of the institutions that we currently use to cooperate through, in a relatively consensual manner as you said, I think it would be an interesting theoretical exercise to think about what those would look like in an AI context.


David Deutsch: Yes.


Allison Duettmann: Very cool. But obviously, you do not only have thoughts on AI. You clearly have an incredible breadth, synthesizing different fields and finding sensible parallels between them. For a young, talented person entering your space, would you be able to give a rough bird's-eye view of what it is you are working on? I also know you said you do not like giving advice. It does not have to be advice. Just from your standpoint, how would you categorize the field?


David Deutsch: Yes, I have said giving advice is not a good relationship to have with somebody. However, I think that "getting up to speed" is a bit misleading. Although there is quite sophisticated knowledge, and if you are indifferent to it you will waste your time, not being indifferent to it does not mean getting up to speed. There is no such thing as speed. I think a better metaphor is one used by my old boss John Wheeler. He said this of physics, but I think it is true of everything. He said, "In physics, every point is a growth point." Wherever you look, whether something has been known for centuries or was just invented today, either of those things can be a point of growth where somebody says, "Why should it be like that? What would happen if it wasn't like that?" Of course, most conjectures are wrong, but they are the means by which progress is made.


If I were starting out now - as indeed I suppose I am, everybody is - then I would want to think about the interesting things and think about what might be wrong, what seems wrong, or what I don't get. Too many people think that if they find something they don't get, there is something wrong with them. That is not true. If you find something you don't get, then there is almost certainly something wrong with something else. It is either with the people who told you about it, the teachers of the courses, the authors of the books, or the actual material. Even if the material is literally true, they may be looking at it the wrong way, and your perplexity may be, and in some sense must be, due to the fact that you are looking at it in a way that was not intended, which has some potential for improving it.


Allison Duettmann: Okay, from your own perspective, have you noticed a cultural shift that was instrumental in your life, either in your academic career or in your field, that shifted anything? Or on a personal level, were there things where you have specifically updated, for example? Were there any specific moments that really got you to update your worldview?


David Deutsch: Well, I think my worldview has only been largely shaken or shaped once, and that is when I got to understand Popper. However, it has been course-corrected several times, and I suppose the best known of those is when I decided to update Turing's work on the universal computer to include quantum mechanics. That was after I had realized that he had made tacit assumptions about physics in his analysis, and that these assumptions were false. What's more, these assumptions were then being used in things like complexity theory to derive what were thought to be mathematical theorems but were in fact consequences of the wrong theory of physics, so they got the wrong answers. I only realized that later, but it turned out that as a result of making classical assumptions, they got the wrong answers for things like which computational tasks are easy and which are difficult.


Allison Duettmann: Yeah, and I think you were relatively successful at going out there and correcting that error, so that is a great embodiment of Popper’s falsification. Popper co-founded the philosophy department at the LSE that I was in, so I have a deep appreciation of him nevertheless. I think I have only gradually come to understand the very critical role he actually played in everyday life over time. I think it is interesting to understand someone theoretically, and then over time, it really sinks in as you continue through life.


David Deutsch: That was very much the case for me too. When I first felt enthusiastic about Popper, my impression of what his theory was turned out to be very wrong. In hindsight, I was not a Popperian at all, as I had misunderstood most of it. What I had understood, though, not to put myself down too much, was that the conventional way of looking at epistemology and knowledge was just completely wrong. What I did not understand was how accurately and powerfully Popper superseded it.


Allison Duettmann: Also, you know, Popper often gets talked about in connection with Hayek, as they are both proponents of the open society. I do not think I have heard you talk about Hayek much in my previous research, so I wonder if you have any thoughts. Were you influenced by him at all, or did you come to it more from the scientific lens?


David Deutsch: I think I have only ever read one book by Hayek, The Road to Serfdom. It was all right, but I did not find anything in there that I didn't already think must be true. Hayek is basically a right-winger, so in regard to economics I agree with him, but not necessarily in regard to society at large. Also, I think Popper overlapped a lot with Hayek, but there were places where they disagreed. In those areas, Popper was usually right, except that he was to his dying day a leftist, and Hayek was a rightist. However, that only affects the color of their ideas, not so much particular policies, which I think Popper was not even that interested in. Popper's and Hayek's takes on political philosophy were much closer than the political policies they actually advocated, and that is much more important: to get right how one thinks errors should be corrected, what role an institution should have, and so on. It is much more important than the actual policies that an institution adopts at any one time. If it can be corrected, then you hope it will be, but if it can't, then you can't.


Allison Duettmann: Yes, I guess to that extent Hayek had more concrete ideas about how those should shape or influence society. He has yet to be corrected. Okay, wonderful. That was mostly to satisfy my own curiosity, so thank you. Another question I had is: what relationship, if any, is there between Taking Children Seriously and the other scientific work you have done? What prompted you to look into this valuable realm?


David Deutsch: I do not think there is, at present, a science of education. I do not think education theory, or even educational psychology, has the potential to be a science in the future. It is all philosophy. Therefore, for me, Taking Children Seriously is simply the application of Popperian epistemology and, more broadly, liberalism to the foundations of education. So it is rather paradoxical because, in a way, it is not much of a change, since liberalism is kind of a dominant assumption in our society altogether. It is completely normal to appeal to things like freedom of speech and individualism in society at large. People may disagree in particular cases, but they will not say that is not a way to argue, or that we do not care about individual choice. However, on the other hand, because of meme theory, there is a strong tendency for anti-rational memes to manifest themselves particularly in education.


If you will accept this analogy: it is just like in biology, where the parts of our genome that are most resistant to change are the ones that determine the structure and function of ribosomes and, generally, of the DNA code. The DNA code has undergone slight changes, with different species having slightly different ribosomes, and animals and bacteria having slightly different genetic codes, but it takes hundreds of millions of years. That is because selection pressure on the thing involved in replication is stronger than anything else. In regard to human ideas, or memes, that thing is the education system, or educational practices. Now, this is not meant to cause despair, because memes are not genes, and we are not victims of them. We can always choose to behave differently and use arguments to decide, instead of the dark feelings one gets when doing the unconventional thing. So we can, but it is no accident, I think, that education is the part of society that has been slowest in adopting the values of the Enlightenment.


Allison Duettmann: Really interesting, thank you. I also had a question on the chapter on hope that you wrote in The Beginning of Infinity. The chapter on optimism is one that I think really brings the point home in a wonderful way. One objection you often get, coming from an existential hope lens, is that it is Pollyannaish and ignores the risks. It can seem like you are fighting an uphill battle when backing the claim that there are good reasons for optimism. I wonder if you could lay a few of those reasons out?


David Deutsch: Maybe I would say not good reasons for optimism, but rather that the one thing I have in common with doomsayers is that I do not think anything is inevitable. Human improvement is not inevitable. It always comes down to the choices people make. And there is no limit to the size of the errors we can make. We can mess it all up if we make the wrong choices. Consequently, that conditions how one can have an optimistic worldview while being able to counter the objections you mention running into. Optimism is not what I call blind optimism. It is not the theory that things will go right even when they look like they will go wrong, and vice versa. Instead, it is that what will happen depends on our choices and knowledge, and therefore there is no reason to give up on any problem.


Problems are soluble, and problems are inevitable. They are soluble because specific types of processes can lead to solving them, just as other processes can inhibit solving them. Conjecture, criticism, and error correction are necessary and most precious in maintaining our forward momentum in regard to our ideas. If they are impaired, it impairs everything. Once everything is impaired, civilization collapses, as it has before. While I see no sign of our civilization collapsing, there is no supernatural force holding it up. It will be up to us. If everyone decides that progress is in fact bad, an illusion, and always at the expense of one group of people in favor of another, then progress will stop, because nobody wants it. Once it stops, there is no reason why it should start up again. Historically, it has stopped and started up again, but in the smaller cases described in the book, like Athens and Florence, it did not start up again. It was taken on board by the general Enlightenment, but I do not know of any law of nature that says the Enlightenment had to happen. I think we should be very grateful it did happen and try to keep it going and improve it, because it is still very flawed and always will be, as we cannot reach a flawless state.


Allison Duettmann: Then do you feel that perhaps the biggest risk we are facing right now is the destruction of those institutions of inquiry, conjecture, criticism, and consensus that took us so long to build, because we got distracted by other things we thought posed higher risks, and the solutions we are putting forth are actually destroying the things we have built thus far?


David Deutsch: Yes. I am not convinced that the risks proposed by doomsayers are in fact very great. It is because they are so important that we take them seriously, but that does not mean the risk is great. What I can say is whenever our institutions are impaired by some fad, fantasy, or bad idea going around, it is bad, and people are suffering as a result. Every time institutions and traditions of criticism and consent are impaired, people get hurt. People die from it. From the point of view of a civilization as a whole, I do not think it is anywhere near that level of harm. For instance, every child that gets dragged to school is an impairment of the growth of knowledge of civilization. Who knows what has been destroyed by it.


Allison Duettmann: Yes, that is beautifully said. Well, thank you. I will hand it over to Beatrice now for a lot of existential hope questions. Ultimately, I think you have truly changed the way people listening perceive the world, in wonderful ways. I also think it shapes how people show up and interact with each other, in a way that lets us hold critical conversations, and if you are not reminded of these reasons every once in a while, it can be hard to do. So thank you so much, and I will hand it over to Beatrice now.


David Deutsch: Good to hear, thank you. 


Beatrice Erkers: Thank you. I am going to ask you existential hope related questions, but I am also curious about this idea, talked about now by Toby Ord in The Precipice, that we are at a crucial time in history where what we do now has an unprecedented effect on the future. As Holden Karnofsky has put it, this is the most important century, as we face these unprecedented risks. What is your take on this?


David Deutsch: I don't think so, first of all. Although nothing follows from this, perhaps it is worth noting that pessimism throughout the centuries, and also conservatism in the bad sense of the word, as in opposition to progress, has always included the idea that we are facing an unusual moment of crisis in which everything we value is at stake. It has always been false, and I think it is false today. Talking about existential risks: obviously there is a risk that the weapons we have available today could bring down civilization, though it is a bit far-fetched. However, they could cause so much suffering that trying to avoid that requires as much effort and attention as avoiding the destruction of civilization altogether. But we have those weapons, and the ancient Romans had enough weapons to do that when they destroyed Carthage. The Catholic Church had enough weapons to do that when they exterminated the Cathars.


Exterminations and destructions of civilizations have happened since the dawn of civilization. Weapons have been used in unprecedented ways since the invention of weapons. If anything, I think the amount of knowledge that exists today - and explicit knowledge is not so easy to destroy, whereas the knowledge in institutions is, unfortunately, relatively easy to destroy - is so enormous that it is hardly conceivable that a civilization brought to its knees could not rise again. They would just have to implement the existing knowledge. They would not have to reinvent agriculture or the tractor and fertilizer; they would just have to look at the book, and it would tell them what to do. So I think the danger is not as it is painted; it is completely different.


On the other hand, the danger from nature is definitely less. We have seen in the last few weeks that a whole range of possible destructions of civilization from meteor strikes is not going to happen, because technology has just recently advanced to the point where we can prevent them. There is still a whole class of possible impacts from celestial objects that we do not yet know how to counteract, but a large class of them, and we think the most probable ones, are no longer a danger. Whereas there was a danger of a continent-destroying impact something like every 250,000 years, that is now gone. And in that sense, one chance of death every 250,000 years, multiplied across 8 billion people, is quite a large risk per person per year. So with examples like this, I think existential risks are diminishing.
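A rough back-of-the-envelope reading of that figure, assuming (as the quoted number implies) that such an impact would be fatal to any given person:

    per-person risk ≈ 1 / 250,000 ≈ 4 × 10⁻⁶ per year
    expected toll ≈ 8 × 10⁹ people / 250,000 years ≈ 32,000 deaths per year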


Beatrice Erkers: That is very nice to hear. That is a message I have not heard in a while. I think a big part of this existential hope project, from our experience, is that it is easy for people to envision dystopian futures versus positive ones. You have argued that problems are soluble, and that even though problems are inevitable, or really hard, that does not mean they are unsolvable. Have you ever thought of any specific visions of the future you feel are desirable? Do you have any visions of existential hope for the future?


David Deutsch: Because of Popper, I think, I am constitutionally opposed to the idea of utopianism. Both to utopianism as a philosophy, that is, the idea that one should try to design a perfect society and work towards it, and also to utopianism as the idea of imagining what perfection would look like. I would rather look for imperfections in what we have, which are, as I said earlier, always parochial, even though they may lead to something universal. I try to restrain myself from being that guy who says something is wrong on the internet, so I have to fix it. I try not to do that. I look for interesting things to fix rather than whatever someone has said that is wrong.


I think, in general terms, I would like the future to be one of ever more rapidly increasing knowledge and ever more rapidly decreasing suffering. Not just suffering in the airy-fairy sense, but the specific suffering we see from people dying in plagues, pandemics, wars, and so on. These require a lot of thought, but there is no law of physics that says we cannot solve them. We can, but it requires creativity. So I envision the future getting better when it comes to conquering evils we know about, but also getting better in ways we cannot possibly know, which would be wonderful.


Beatrice Erkers: Yes, I think I recall you wrote about how creativity is an extremely important tool in gaining this knowledge that you think we need more of. We just spoke about Taking Children Seriously. Is there anything else we can encourage on a societal level for harnessing more creativity, which would enable more knowledge?


David Deutsch: Yes. At the moment, Western culture is suffering from a wave of fads whose general theme is to oppose Western culture, civilization, and the Enlightenment. As I said earlier, the claims that it was fake, that it did not happen, or that it happened but was bad - none of this is true. Much of it is based on factual misconceptions as well as philosophical errors. However, this phenomenon is informing people's worldviews: there are several such things sweeping Western civilization. All of them have the effect of inhibiting progress by inhibiting freedom, such as restricting the range of behaviors that are tolerated, restricting speech and communication with more things becoming taboo, enforcing reinterpretations of history or the relabelling of the phenomena of the Enlightenment, and so on. All of these things are bad, and they have provoked reactions against them, which I hope will eventually win, or be replaced by something better, and so on.


In this context, I should say, as I have said before and been criticized for: in science, cranks are valuable. Even scientific publications ought to give some space to cranks. It is not that they are always right; as J.S. Mill said, sometimes they will be. However, even if they are never right, as J.S. Mill also said, you cannot understand the true theory without understanding why the cranks are wrong. I think cranky moral and political theories are in the same category. Unlike in science, the danger is that they get into power and suppress progress towards true theories. Yet they are also a source of problems to think about and apply creativity to. The danger is only that they get into power. Their ideas spreading is not in itself dangerous. Our society is good at not letting dangerous people into power, so let us bear that in mind.


Beatrice Erkers: Thank you so much. There are two more questions I want to make sure I have time to ask. First, you got a question on Twitter today. You have mentioned that the idea of the universal constructor in The Beginning of Infinity is flawed. Is that something you could expand on a bit?


David Deutsch: Yes, but it is not a very important point. It is mainly a manner of terminology. In The Beginning of Infinity, I classified humans as universal constructors, by which I meant there is not any fundamental limitation on what we can build or what transformations of physical objects we can perform if we want to, other than the laws of physics, which are limitations. Now, since then, I have actually tried to develop constructor theory in general, and in particular the theory of the universal constructor. 


It turns out that it is really essential in the theory of constructors, just as in the theory of computers, to imagine objects that obey their program. So a constructor is first and foremost something that obeys its program. Then you can ask what range of possible programs it can be programmed with and what it can do as a result. A universal constructor is one that can be programmed to do anything possible, as long as it does not violate the laws of physics. Therefore, a universal constructor must be perfectly obedient.


A human, on the other hand, as I mentioned at the beginning of this chat, is almost by definition unable to be obedient. Something which is creative cannot be perfectly obedient; that is a contradiction. Now, you can say the human body is an approximation to a constructor. Although the mind cannot be programmed - it has to sort of consent - the body is more or less obedient to the mind. Not perfectly, but well enough to count as an approximate universal constructor.


There is also the fact that humans are very slow at some things. Whether or not it is possible, we do not know how to make a real universal constructor yet. Supposing someone designed one tomorrow, it might be something like a computer with a robot. Whether an individual person could build that computer and robot in a lifetime out of naturally occurring ingredients, I don't know. So there are limitations on humans as universal constructors, but as I said, that is really not too important. It was just a change from the terminology in the book to a more convenient terminology. It doesn't mean there are any limitations on the scope of what humans can do. We don't start with naturally occurring things. If I wanted to build a physical machine, I would not begin by digging for iron ore. I would go to the store or Amazon and buy things that are close to what I want to make and just assemble them.


Beatrice Erkers: Thank you, it is interesting to hear you expand on that. The second question I want to ask has to do with the purpose of this podcast and inspiring a more positive vision of the future. We always ask for an example of a eucatastrophe, which is basically the opposite of a catastrophe: an event after which the expected value of the world is much higher. Could you share a vision of what such a eucatastrophe could be? Maybe it would be the creation of the universal constructor or something similar.


David Deutsch: Yes, I was going to share something like that. I think it will be important: after the universal constructor is first built, it can then build exponentially many more. The human role in production will no longer involve unpleasant physical work. Toil will be completely ended by the invention of the universal constructor, although civilization has already reduced toil by something like 99% from when the human species first evolved. Still, I think it will be fairly dramatic by the standards of everyday events.


Instead of toil, the role of humans will be entirely to provide knowledge, either for its own sake or to program the constructor. There will be extremely sophisticated aids to programming the universal constructor, just as ChatGPT can take a lot of the toil out of writing a program. All it really does, as I understand it, is take the corpus of all programs that have been uploaded to the internet and construct what you asked for, in the same way it constructs good English sentences.


By the way, I was surprised at how good ChatGPT is at constructing sentences in proper English. I would have guessed that it would be decades before AI could do this. AGI, of course, can do it relatively easily, but I am not sure that is on the horizon. As I have said, people working on this have got the idea that an AGI is just one more step away, and then their AI will become an AGI. I think it is the opposite. The AIs are getting further away from an AGI.


Beatrice Erkers: Yeah, I saw on your blog that you had a bit of an argument almost with ChatGPT about writing a poem. It got it right in the end I think.


David Deutsch: It does. It often gets it right in the end precisely once you have inserted, through your angry objections, all of the knowledge that it needs to get it right. 


Beatrice Erkers: It was a fun read, and I can recommend it. Also, you have mentioned Popper a lot through the conversation. If one hasn’t read anything by Popper, where should one start?


David Deutsch: I am often asked this, and I don't know. It really depends where you are coming from. Popper was so broad in his subject matter: political philosophy, philosophy of science, philosophy of knowledge, and so forth. Within those, he addressed problems in different ways. I think the concept that maybe unifies all of Popper's thinking across all of these subjects, as Matjaz Leonardis recently pointed out to me, is the concept of a problem. A problem in science, a problem in philosophy, a problem in politics, and so on. And this is what one of my chats with ChatGPT was about. It didn't know, and I reminded it, that according to Popper the growth of knowledge always begins with a problem. I asked it, "What does the growth of knowledge, according to Popper, always begin with?" It said a theory, a criticism, and so on. I said no, it is a problem, start again, you know. Finally, it did give a nice version of this.


However, to answer your question, if someone wanted to approach Popper, I would say: think about what problems you would like to have eliminated by a much better theory of knowledge than you have. That will guide you to which of Popper's books, articles, or videos will best make sense to you at first. Then, later, you can see the connections with other things. There is a lecture by Popper, "On the Sources of Knowledge and of Ignorance." Every so often, I go back and read it. It is not very long, and I get something new out of it every time. I think it is the best discourse on epistemology ever written. It is incredibly deep and clear. The thing that prompted me to it is that Brett Hall made five videos explaining this lecture by Popper. He ended up saying that he did not think anyone would want to spend five hours listening to his videos, but it is worth it. You can also read the original, which is nowhere near that long.


Beatrice Erkers: That is a great recommendation, and I think we can link the talk in the podcast notes when we post it. I want to echo what Allison has already said. We are great admirers of yours at Foresight. We are very happy you came on this podcast, and I am looking forward to seeing what our AI image generator will make of your prompt for the universal constructor. Thank you so much for coming, David.


David Deutsch: Thanks for having me.

