In this episode of Foresight’s Existential Hope Podcast, our special guest is Kevin Kelly, an influential figure in technology, culture, and optimism for the future. As the founding executive editor of Wired and former editor of Whole Earth Review, Kelly's ideas and perspectives have shaped generations of thinkers and technologists.
Join our hosts Allison Duettmann and Beatrice Erkers as they delve into Kelly's philosophies and experiences, from witnessing technological shifts over the decades to fostering optimism about the future. Kelly shares details about his latest book, a collection of optimistic advice in tweet form, and talks about his current project envisioning a desirable hi-tech future 100 years from now.
He also discusses the transformative power of the internet as an accelerant for learning, the underestimated long-term effects of being online, and the culture-changing potential of platforms like YouTube. If you're interested in the intersection of technology, optimism, and the future, this podcast episode is a must-listen.
Kevin Kelly (born 1952) is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Review. He has also been a writer, photographer, conservationist, and student of Asian and digital culture.
Philipp Lenssen from Germany has been exploring the intersection of technology and art all his life. He developed the sandbox universe Manyland and wrote a technology blog for 7 years. He is currently working on new daily pictures at Instagram.com/PhilippLenssen
Allison Duettmann: Hi everyone, and welcome to Foresight’s Existential Hope Podcast, where we try to bring on wonderful scientists and thinkers who are extremely optimistic, or at least differentially optimistic, about the long-term future, to tell us why and explain what they are working on to make great futures more likely. I think there is hardly anyone better to have on this podcast than Kevin Kelly. Kevin has been something of a fixture of positive future thinking for quite some time. I may be wrong on this, and I tried to find the photo back but could not, but I think I saw you at very, very early Foresight gatherings as well, with Stewart Brand and a few other folks. That was early on, when I was still in London looking through the Foresight archives, and I think I saw your face.
Anyway, you are the founding executive editor of Wired, and you were the editor and publisher of the Whole Earth Review, which many people in the Foresight community really, really loved. You also take absolutely incredible photos and recently published a wonderful photo series on Asia. And you are here because you are extremely optimistic about the future. You have given a TEDx talk on this, and you also have an excellent new book coming out in May, right? It is optimistic advice for individuals, in tweet form, so I am really excited to get into that as well. You have also done so many other things, including a “Future of X” video series. You have a wonderful blog, The Technium, and you wrote wonderful books before that, such as The Inevitable. I am just super excited to have you here.
Fun fact: on the side, we founded a whole Burning Man camp on the notion of “prototopia.” We call it that, not protopia, because we came up with the term independently; when we Googled it, you had, of course, already originated the term. So thanks a lot for joining us; we are really excited. I know I said a bunch of different words already, but maybe to kick us off, you could start by telling us what you are working on. Could you perhaps share, in a few minutes, the life story of how you became the person you are right now? Big question.
Kevin Kelly: Yes, so many questions; thank you! Again, I appreciate everyone’s attention here. I know our attention is our one scarcity, so thank you for taking some time; I hope I can be useful to you. As for my qualifications: yeah, I am off-the-chart optimistic, and I have actually become more optimistic as the years go by. I am more optimistic than I was even 10-15 years ago. My optimism is partly temperamental, but I have been deliberately engineering my optimism, becoming more optimistic on purpose. I also realized recently that the old division of liberal, conservative, Democrat, etc., does not interest me or work for me anymore. I am more interested in a more fundamental divide: there are those who are problem-oriented, and that is the lens through which they see the world, whereas I see the world in terms of opportunities.
Some people say, “The world is screwed up, and there are so many problems, so we are going to try to fix them.” But I feel the world is full of opportunities, so I do not want to focus on the problems but on the opportunities. That is where I am. Anyway, I have this little book, and the honest answer is that I am promoting it. However, the other thing I am working on, which may be of interest, is what I call the “100-Year Desirable Future.” It is a set of scenarios about a future 100 years from now that is full of high tech.
It is the future I want to live in. I am trying to imagine a world that has all of this stuff that most people are crazily afraid of, make a version of that world I would want to live in, and then work in 10-year increments to detail a way to get there. It is not a prediction, and I am not suggesting this is inevitable in any way. Instead, this is my attempt to counter the generally very dystopian views of the future that people have, particularly in storytelling. The purpose of this scenario is to have a world to write stories in, because the world itself cannot be boring.
Allison Duettmann: I love that. Have you heard of the AI Worldbuilding that Future of Life Institute did last year?
Kevin Kelly: No, I did not know about that. Please tell me about it.
Allison Duettmann: It is a really wonderful project. I was one of the judges of the scenarios which is probably also why I am biased. They solicited a dozen AI world builds in 2040, with a bunch of different timelines, almost like prediction market level timelines. Those were distinctly positive worlds. Not utopias, but positive worlds layered by AI, with a bunch of other technologies too. They had people write true stories of the life, create personas, and so forth. I think they are pretty serious about continuing this sort of scenario planning.
We just had our Existential Hope Day, where we basically grouped people according to their domains of technological expertise and had them create existential hope scenarios, with specific timelines as well. I think it is a really wonderful exercise. My question for you is: are you creating one specific scenario, or several different ones as a more robust strategy? Is it one or several?
Kevin Kelly: I am doing the thing you are not supposed to do with scenarios. Scenarios should always come in multiples, and I am writing one. That is how I am starting, because it is not a group project. If I get to the point where I feel more comfortable about what it actually is, then it would make sense to have multiple variations. The thing we were always taught and practiced with scenarios was to map the possibility space and not focus on one, but this is a little closer to a work of art. It is not quite a narrative, but there is a narrative aspect, because there is only a single scenario.
Right now, it is just me, my assistant, and a researcher. One of the things I did was make sure we researched all of the “official futures.” Official futures are what we call the standard industry forecasts: the future of sports stadiums, for example. We went through and tried to find any industry that had a long-term (more than two years out) forecast, and we gathered them together just to see what the official future is. Now, they are always wrong, but they are a good place to start. Also, almost none of them take on anything past 10 years, so 100 years is beyond their possibility space entirely. Still, that is certainly one of the places where I began. What people expect gives us a place to start, even if it is not what is going to happen.
Allison Duettmann: I love it. Please let us know when it is up and running and when there is anything to read or share; I am sure many folks in our community would be dying to get their hands on it. Very cool. You have been around for a while and have seen many different technological shifts and opportunities. Christine Peterson made this great point on a panel the other day that the technologies are coming, just sometimes not as fast as we, or at least the early Foresight community, thought. So I am super curious, from your perspective: what are some major cultural shifts in terms of long-term optimistic future thinking? I am sure you have seen some ups and downs. Are there any major things that have influenced the way you think about the long-term future over these 100 years?
Kevin Kelly: Sure. Yeah, I have been around for a couple of cycles of high expectations and, maybe, disappointment. Jaron Lanier actually showed me his VR back in 1988 or ’89. I was really blown away and thought the next five years would be transformative, with VR sweeping the world. Now, 30 years later, the quality of VR is actually not that much better, albeit a million times cheaper. So I was wrong about that. It goes back to the famous quote that we should not mistake a clear view of the future for a short distance. You can clearly see where you are going, but that does not mean it will take a short time to get there. That is just one specific example.
I think we have now been through a few generations of AI. We are at a moment where there seems to be a little quantum leap, but I still think we are pretty far from even the start of it. In terms of what I have learned, one thing is that we overestimate what happens in the short term and underestimate what will happen in the long term. For example, what are the effects of being online 20-30 years from now? I try to convey to my kids the incredible poverty of information I had growing up. It is impossible to imagine how hard it was to find out about things. It was so difficult, and that was what was so genius about the Whole Earth Catalog. It was this portal to some of this known knowledge that just was not accessible. That was also the subtitle: “access to tools.” So I think the state we are in now, of assuming we can reach out and get the information we need for anything, has accelerated learning to an extent we do not appreciate.
That change is hard to even measure, because anything we want to do, from being an activist to trying to invent something to changing your mind, is now at least a million times easier, and that has really moved our culture and is accelerating everything. You know, I am a total YouTube fanatic. I could talk about it at length, and I think it is highly underappreciated as an accelerant of culture. It is not just makeup tutorials or people in workshops, but goes all the way to brain surgeons posting techniques, which are then adopted by others and improved, and so on. I think that is a fundamental example we have yet to truly acknowledge, and such instances show big changes in the culture.
Allison Duettmann: Yeah, I agree. It would be nice to make people aware of how hard it used to be to find something out. I remember when I was driving to Amsterdam, I already had Google Maps, but my dad literally wanted to give me a physical map with the route marked on it. Even compared to the time when you would have to ask around about how to get to the next city, things are so much easier now. But yeah, I second the YouTube point. We have lots of technical seminars here at Foresight, in biotech and neurotech and so forth, and I often get emails from people saying that this is so much more valuable than what they learn at university, because it is people talking about how hard it is to build a company in this phase, how to use research to explore, and so forth. So I think this gradual undercurrent of education we get through YouTube, tailored to what we want, is very special.
Kevin Kelly: So the obvious question to ask ourselves is: if that has changed this much in only 20 years because of information technologies, how might it change in the next 20 years? What kinds of things might we expect in just the next 20 years alone? I also think one of the things happening right now with AI, which I find very exciting, is this moment of realizing that AIs can make stuff up and wondering how we can trust them.
I think it is a very profound disturbance in the force, because it requires us to up our game and develop new tools, as well as to decide what is true or not. Generally, how do we ascertain what we can trust and not trust? How do we determine that something is true? Right now, we do it in an intuitive way, and we have experts, but that will become much more mechanical and embedded. To do that, we need to know a lot more about how we decide whether something is true or not. For instance, we may require sources to be cited, and then how many layers of sources down should we go before accepting something, and so on.
Anyway, I think we are moving through this epistemological frontier because we have AIs involved. Humans sort of accept things, but for AIs to accept something, we need a lot more precision and fundamental apparatus to make that happen. So I think it is going to be a very messy journey for the time being, which is also very exciting, because there is an opportunity to develop and increase our ability to learn by working through these questions: how do we get to something truthful faster? How do we reach consensus faster? I do not know what that looks like yet, but I think these little glitches of doubt that AI chatbots raise about whether we can trust things are not a problem but an opportunity.
Allison Duettmann: Yeah. When you mentioned how much information is on the internet, I think the next iteration is what Sam Altman pointed out about ChatGPT. He was saying he does not need to read anything anymore because ChatGPT can give him a summary of it. That has good and bad consequences. However, there is this thing in ethics called reflective equilibrium, where you take your intuitions about a bunch of situations and construct heuristics from them, and then you update your intuitions or principles based on whether they still apply to the new situations or objections you encounter.
For me, I did that with trolley dilemmas in my ethics undergrad. It took me a whole year, 120 dilemmas or something. Recently, I did the entire reflective equilibrium across trolley dilemmas using ChatGPT. It spat out every single dilemma I had not considered yet, what other philosophers thought about them, different biases I could be prone to, and so on. It literally gave me what took a year of diving through papers in undergrad ethics; it handed me the same references on a silver platter. So it is not just useful for epistemic updating but for moral updating. I thought it was truly incredible to think about how we can make use of these massive amounts of data, even though some of it may not always be truthful. I am curious how you see this human-AI evolution going in the next 5-10 years. Do you see human-AI symbiotes, humans assisted by and collaborating with AIs, or more worries? What is your general path here, if you have an idea?
Kevin Kelly: I think the general framework is that there will not be a singular AI but hundreds, if not thousands, of different species of AI with different mixtures and complexes of thinking. They will all be aliens; for me, they are artificial aliens. They can range from something pet-like or animal-like to ones that are very highly complex and sophisticated, but they will always be a little different from us. It is like Spock: he is a little more intelligent than us, but he is a little different, and that is the value. That is why we are making these things: because they are not exactly like us, and they think a little differently.
So for me, the centaur symbiote is the general stance. As for the idea of them taking over, I totally reject that stance. In my view, it is a misunderstanding, and I have written about why I think the singularity and paperclip stance is a myth: it is a misunderstanding of what intelligence is, as far as we know. In the long range, I think we will have a network of thousands of different species of AIs, many of them invented to solve problems our own minds cannot solve alone. Together we can solve more. In some sense, the more varieties of minds you have, the more things you can do.
Also, the idea of general intelligence is, to me, a misguided idea. I do not think there is such a thing; there is a possibility space, and human-like intelligence is more at the edge of it. We are a very specific intelligence that evolved on this planet to solve certain problems. Once we meet other kinds of intelligence, we will realize it is like having a tool or machine. We do not make general machines; we make very specific ones, and a tool tailored to something specific is always better at that thing. An iPhone does things really well, but a dedicated camera is still a bit better. We accept the little trade-off of having a Swiss Army Knife version of things, but that does not mean the iPhone is superior to all machines. It is not. There is always an engineering trade-off: you cannot optimize everything. You always have to trade something off, and that includes intelligence.
So my general view for the immediate future is that these chatbots and neural nets will take the role of interns. Basically, what we are getting are personal interns. For the first time, you have an intern, and you want to check their work rather than trust them on their own. The intern is going to help you write lyrics, or write a first draft of code, or make a summary, etc. The universal intern will help us do whatever we want to do, which is huge. Millions of people will have millions of interns at their side, helping them do their work. That is a huge step forward, and that is my excitement about the current moment.
Allison Duettmann: Yeah. From our section in the chat, David pointed out that computers are general and can run any program, which shows there are places where generality is the default or optimal. David Deutsch even came on here earlier, and you know he is advancing the universal constructor. I was wondering about your thoughts on this. Then I have one more question, and I will hand it over to Beatrice to talk about your book, but this is too interesting to let go.
Kevin Kelly: Right. One of the fundamental theorems of computer science is the equivalency of the Church-Turing thesis, which says that a computer can emulate any other computer, given infinite time and storage. The difference is that real computers do not have infinite storage or time. They have to operate in real time, so in theory there is equivalency, but in actual practice, when you have to run things on a substrate, the substrate matters. Yes, you can emulate things with enough CPUs, but an emulation is a simulation: you have to cheat somewhere and eliminate things in order to emulate it, or it will be slower. That slowness makes a difference in real life.
We operate in real time. The theory says any computer can emulate any other, given enough time and storage. In real practice, however, the substrate the intelligence is running on is going to make a difference in how it thinks and works. Yes, you can emulate it, more slowly, over time, but that is not how we operate. What that means is that if you want a computer that really thinks like a human, you are going to have to gradually move to running it on wetware. We can do that, but it is so much easier to make humans in nine months with unskilled labor, so there is really no reason to make human-like intelligence. It is much more useful to make intelligence that is not like us, and it is easier to do because it is running on different hardware. So the Church-Turing thesis is true but not that practical, in my opinion.
Allison Duettmann: Thanks. On your notion of collaborating with alien intelligence in a mutually beneficial way, there was an interesting LessWrong post on “Cyborgism,” basically taking the bits where AIs are not human-aligned but simply different from humans, which is almost their comparative advantage, and trying to see what we can learn from them.
Kevin Kelly: Exactly.
Allison Duettmann: I also really loved your comparison of AIs to the larger economy. It echoes something Eric Drexler wrote a while ago on comprehensive AI services, where he compared AI to the economy getting better at producing very specialized services that we can use to increase our own capabilities and cooperate with. In those scenarios, there are still problems like AI deception of humans, or even collusion, and I wonder if you are worried about that at all. I think you already said you are a bit worried about deception. That will be my last question on this: are there worries, even in this more decentralized world that just continues evolving the way it does, much more enhanced by these intelligences?
Kevin Kelly: One thing is that I am not worried about it very much. I do not worry, because worrying focuses on problems, and for me it is all about opportunities. It is an opportunity to increase our understanding of truth and how we know things. Also, I think the bulk of AI is going to be served as a utility. It will not be generated by yourself; most of it will be delivered to you like electricity. You will buy as much as you want and consume it. In the same way, we do not generate our own electricity; even though we may have a backup generator or solar panels on the house, it is a commodity and a utility. AI will be much the same way for most of it, say 90% of it.
Plus, it will be invisible; much of it we will not see. It will be in the back office doing stuff, and it will be successful because the mark of a successful technology is that it disappears; we are not aware of it. Behind the walls of my house are the plumbing, electrical, and infrastructure that I am not even aware of, and I do not want to be. I think AI will be much the same, operating in the background. The bulk of it will be done as a service and a utility.
Allison Duettmann: Wonderful, we will leave it at that. I am excited to hand it over to Beatrice to discuss existential hope and long-term scenario bits as well as parts of your book. Thank you so much. This was quite the wild ride already.
Beatrice Erkers: Yes, I think it was quite the wild ride already. It was definitely a great taste, and I feel I already got the answer that you are very optimistic about the future, which is something I usually ask people about. I think I first came across you when I read your book The Inevitable, which is about the technologies you feel are inevitable and will arrive and shape our world. I was also at the Future Forum last year and saw your debate about whether technology is deterministic. I think you were arguing that technology is deterministic. Do you want to pitch this briefly?
Kevin Kelly: One thing I would say is that I am a reluctant technological determinist, in the sense that I did not start off there and am not happy I am there, but I am. I have become convinced that there is a general developmental sequence of technologies, mostly governed by the physical nature of things. That comes from seeing technology as an extension of the same organizing force that runs through life and evolution; it is basically an accelerating force of evolution. I also side with a much smaller camp in biology that thinks there are directions in evolution itself. That is not the Stephen Jay Gould version, where if you replay the tape of life, you get everything completely different.
I think if you replay the tape of life under the same general conditions, you will get some of the basic forms the same. For instance, the solution of a quadruped, on a planet of our size with our gravity, is going to appear again and again, but you will never have a zebra again if you rewind it. Species are unpredictable, but the larger forms and basic blueprints, like the quadruped, are going to repeat because they are physical solutions. I think there are lots of similar things in technology. Once you have discovered electrical currents and signal capacity, you are going to make telephones on any planet, under any political regime, throughout the galaxies. That would be a common pattern.
So my first argument is that if you study the history of science, simultaneous independent invention is the norm. The things we come up with are not dependent on some heroic genius. They are networked ideas that come about because the ideas next to them have been implemented, and they are the next logical step. For instance, two people filed for a patent on the telephone on the same day; Elisha Gray would have been credited with inventing the telephone had he arrived a few hours earlier. These inventions are networked, systems-assisted inventions, so they have a sequence. There is a sense in which the larger forms are inevitable once the other parts are already there. They will come out of our entrepreneurial interests, and they are not dependent on a Steve Jobs or a Thomas Edison. Edison was the 23rd person to make an incandescent lightbulb, not the first, but he figured out how to make the business side work, including marketing and durability. There were a lot of people working on it at the same time, which is the norm. That is one argument for determinism. There are others, but I do not want to get too deep into that.
Beatrice Erkers: Yeah, that is great. I definitely agree. It is an interesting experience right now where everybody is talking about AI, not just people in this sort of bubble where everyone is thinking about technology; instead, it is starting to trickle down and show up in these little places in our everyday lives.
Kevin Kelly: It reminds me so much of the beginning of the internet. I have been online every day since 1982. For 10 years, I was living online before the web came along. Very few people thought it would become mainstream; it was a teenage-boy phenomenon. Many smart people said the average person would never go online or buy anything online, ever. It was a very fringe thing, and obviously it became mainstream, and the same is happening with AI. There has been this feeling that it is not for me, that it is very esoteric, and I think we are at a moment like 1993 or so, when people began to get it. It is a similar moment right now of moving into the mainstream. Now this language interface is making it accessible, where people can just talk to it, and people are realizing that it is here, even though it is not fully here yet.
Beatrice Erkers: Well, I think your analogy of AIs as interns is good. People will have a real advantage if they train themselves to use these tools, and that is what I am recommending to my siblings, who are 15 and 18: just use ChatGPT. That leads us to your life-advice book, which you have just written. What is the story? How did you come to write it?
Kevin Kelly: Yeah, so when I am trying to change my behavior, I find I like to have a mantra or something to repeat to myself as a reminder. As a result, I got into collecting quotes that were practical and would remind me of a behavior. Then I found that I liked to take a whole book of advice and reduce it to a little 140-character message I could use to remind myself. For example, a great piece of advice is to ask myself, “Would I do this if it were tomorrow?” It is very easy to say yes to something far off, but thinking of it this way helps. I use it all the time when something sounds interesting; it is a sort of filter. Another example: if I have something in my household and cannot find it, when I do find it, the advice is, “Do not put it back where you got it, but where you first looked for it.” I repeat that to myself all the time. Those are the sorts of practical things I got into the habit of collecting. There are some I wish I had known long ago, for instance, that the most valuable thing we have is our time, so the greatest leverage we have is in hiring other people’s time to get something done. I wish I had known that in my 20s, because I really thought I had to do things myself. You can outsource, because my most valuable asset is my time, so if I can get some of your time, that is high leverage. This idea helps me now, and I use things like Upwork to leverage my time.
So I took some of these ideas and reduced them into little bits of advice, things I wish I had known when I was younger. I decided I wanted to give them to my kids, so I started writing them down, and my assignment to myself was to make each one as short as possible, almost like a tweet that could be sent to a friend. I posted some of them on my birthday one year, and they ricocheted and went viral, so I kept going until I had a whole book for my kids of things that are useful and practical, things I wish I had known when I was younger. We can open the book almost anywhere. Here it says, “Your best response to an insult is ‘You’re probably right,’ and often they are.” Or, “Any significant success takes at least five years, so budget your life accordingly.” I wrote that one because I feel most really good ideas, from when you first have the idea to when you are done, take about five years, and you only have so many of those in your life.
Anyway, there are lots of things like this. “The hard part of predicting the future is forgetting everything you expect it to be.” That is the hardest thing, forgetting our expectations, because the future will be surprising. In that sense, it is also really hard in scenario workshops to get people to forget what they already think the future is going to be. Anyway, I could go on; there are 450 of them.
Beatrice Erkers: How would you go about that, then? If you found a favorite piece of advice in your book and wanted to implement it in your life, do you just write a sticky note and keep it with you?
Kevin Kelly: No, I try to make them into these little aphorisms and proverbs I can remember when I need to. For me as a writer, there are lots of pieces of advice that I just repeat to myself. First of all, you want to separate the writer from the editor, because you cannot be both at the same time. When you are writing, you have to not be judgmental; do not let the editor anywhere near your writing while you are on the first draft. You want to separate those functions. The same goes for when I am painting; I create a piece of art each day, and I do not want the gallerist judging over my shoulder while I am creating. Once you have done it, then you go back and think about what you want to change, again and again, but you have to separate those roles. If an editor is over your shoulder, you will not get very far. So when I sit down to write, I remind myself not to let the editor near the first draft, and that first-draft stuff is a part of the process. These thoughts, reduced into those tiny tweets, were the handle I needed to bring them forward when they were needed.
Beatrice Erkers: I skimmed through some of the advice, and I can definitely recommend it; it is a really wonderful book. I also see people in the chat saying they are going to order it. One thing I wanted to ask you, because you already said you frame everything in a positive, optimistic framework, is whether you have always been this way. Was there a particular experience that made you excited and hopeful about the future?
Kevin Kelly: Well as I said in the beginning, I have a naturally sunny temperament. I would not have described myself in terms of optimism before. I have learned optimism, let me put it that way. I actually learned to be more optimistic than I even am naturally, and I learned it because I feel as though the evidence shows I am a better person, more useful, and better at my job in life when I am optimistic. So I have become deliberately more optimistic, even though I am naturally sunny in my view. There is a great book called Humankind, I forget the name of the author though, but I think more people should read it.
Beatrice Erkers: Rutger Bregman, I believe.
Kevin Kelly: Right. His premise was that the default for human behavior is actually altruistic and selfless. Kindness is actually the default, and he has evidence that even in times of disaster and crisis, when people should be very selfish, people tend to be much more altruistic than not. Times when people are deeply selfish, mean, or unkind are actually not the default. I believe that because that has been my experience growing up and travelling the world. Instead of going to college, I went to Asia, and I mean I went to dangerous places, places where I could have been taken advantage of, places with total strangers, and none of that happened. It was a totally uplifting experience because I was trusting of other people and they gave me their best back.
That is another little piece of advice: Even if you were ripped off or cheated because you trusted someone, that is a small tax to pay for the otherwise incredible generosity that people will respond with if you trust them. So overall, I have gained far more, and even if I was occasionally cheated because I trusted someone, that is a tax I would happily pay for the tremendous amount that I have been given back. There is something weird about the universe and the way it is constructed to where the more you give, the more you get back. It does not make any sense at all, and it should not work, but let me tell you, it is so reliable. The arithmetic does not work, I do not know what it is, but there is something about that. Anyways, that is my experience. My experience throughout life has been to trust people, be optimistic, and believe something good would come out of it, and life has treated me very well because of that. I do not know why it works, but it works.
Beatrice Erkers: Yeah, I believe you. Rutger Bregman also wrote a book called Utopia for Realists, so I think we should really have him on the podcast as well, because that is something we are thinking about too. I also want to ask you one more question before we run out of time here at the end. One thing we talk a lot about here is the potential of a eucatastrophe, which is the opposite of a catastrophe: an event after which the expected value of the world is much higher. One thing we also notice is that people assume it connotes catastrophe. You have already proven to be a good wordsmith, so do you have any suggestions for alternative names?
Kevin Kelly: I have not heard of this before, but that is fabulous. I love the idea. I will think about whether there are alternatives to it, but I think the concept is beautiful. Let me think about that. Exactly, cascading good. They sometimes call it a virtuous spiral, and it is kind of like network effects a bit. It is this idea of cascading goodness, and the current term is eucatastrophe?
Beatrice Erkers: Yes, eucatastrophe. We actually had a bounty contest, so we have a lot of proposals. We just need to agree on one.
Kevin Kelly: Yeah, that is hard to say. It would be better to have something easier. What are some of the other candidates?
Beatrice Erkers: Efflorescence was the winner of our bounty competition. I think fantastrophe is one I am hearing a bunch as well.
Kevin Kelly: Okay, well that is a great assignment. I love that, thank you.
Beatrice Erkers: So another thing we do is ask if you can think of what a eucatastrophe would even be, because then we will try to use it as a prompt for an AI art generator to create an art piece as a way to visualize a positive event of the future. Do you have any ideas?
Kevin Kelly: You know, if we make contact with another civilization, that would be a good example. But who knows if or when that will happen, so we can create the equivalent artificially. We will make artificial aliens, and I think somewhere in there, once we get to that point, that would unlock and generate this sort of cascading goodness. Another one might be fusion, if energy became really cheap. There may be some interesting things that would happen and cascade off of that. Or even something related to telepathy. I just went down to Neuralink to see what they were doing there. Oh my gosh, that was pretty cool because they are farther along than I thought. There may be something with mind-to-mind communication. Ramez Naam's trilogy is a little scary about that; nonetheless, there are good things that could cascade off of that. I want to thank you all very much for your attention and the opportunity to rant about my optimism. I really appreciate what you all are doing, so thank you for having me.
Beatrice Erkers: Thank you so much for coming by.
Recommended Reads Mentioned: