Robin Hanson | On Futurism & His Best Career Advice

about the episode

Is Robin Hanson feeling hopeful about the future? What future does he want to see? How big can the future be? In this episode of the Existential Hope podcast we ask Robin Hanson about his reflections on the future, and we also collect his best career advice.

Hanson is associate professor of economics at George Mason University and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from the California Institute of Technology, master’s degrees in physics and philosophy from the University of Chicago, and nine years of experience as a research programmer at Lockheed and NASA.

Robin speaks on:

  • How people tend to think about the future
  • His Grabby Alien paper and the Fermi Paradox
  • What excites him about the future (hint: it's mainly how big it is!)
  • About his career as a researcher within multiple fields
  • And much more!

This episode also features an NFT artwork inspired by Robin's eucatastrophe prompt.

Xhope scenario

A High Tech Future: Cryonics, WBE, and Interstellar Colony Ships
Robin Hanson
Listen to the Existential Hope podcast
Learn about this science



About the artpiece

Yann Minh is one of the first French artists who built immersive multimedia installations. Born in 1957, he has been a video, multimedia, digital and new media artist since 1979. He is also a writer of cyberpunk science fiction, and founder of the Noonautes, a neo-cyberpunk movement. A child of television, comic strips and science fiction, his work is inspired by Nam June Paik, Giger, Druillet, Moebius, Philip K. Dick and Marshall McLuhan.


The transcript below follows a question-and-answer format, highlighting this X-hope podcast special. Allison Duettmann and Beatrice Erkers of the Foresight Institute sit down to co-moderate a session with guest Robin Hanson. The podcast asks guests what their current professional work entails, what motivated them to reach this point, what hopes or risks they see for the future, and what advice they have for listeners. Robin Hanson, an Honorary Senior Fellow at Foresight, is a well-known author and innovator who has worked across many fields throughout his career. He sits down with us and shares his thoughts.

Allison Duettmann: Hi, everyone! Welcome to Foresight’s Existential Hope Special Podcast Episode. We have a few people here with us today, as well as a wonderful speaker, Robin Hanson, who is in the interviewee chair. It’s definitely not your first time, Robin! Thank you so much for coming back. You’re an Honorary Senior Fellow at Foresight now, and you’ve been at Foresight much longer than I have been.

Robin Hanson: I’m old!

Allison Duettmann: No, not at all. You’re just very experienced! 

One of the first things I found when I looked in our archives was that you and Mark Miller led a prediction market at our 1999 member gathering. I think people voted by sending checks to our offices afterwards. I have a few of them locked away, but we should eventually break them out again. You pioneered prediction markets, but that was not the only thing. You were also quite well known for your debate with Eliezer Yudkowsky on different takeoff scenarios for AI. In addition, you wrote “The Age of Em” as well as “The Elephant in the Brain,” which I think became quite a success. You point out a few of our biases, such as why we do the things we do. You also have a wonderful blog called Overcoming Bias. All of this does not even touch upon your academic career yet. 

Then, you wrote another paper on Grabby Aliens that has received a lot of attention without it being published yet, so I am really excited.

Robin Hanson: Oh, it’s published. Grabby Aliens was published in The Astrophysical Journal, which is a top astrophysics journal.

Allison Duettmann: Oh, I had read a part of the draft for the longer book. That one is not out yet, correct?

Robin Hanson: That is correct.

Allison Duettmann: For people that are interested, the website is grabbyaliens.com, where they can learn more about the concept. There is a video on there, a paper, and so forth. So, I think people are understanding that you are a man of all trades. As we get into the questions, your answers may be more contextualized, since you can draw on a variety of different things. Thank you for being here. Let’s get started. 

What are you working on? What got you on your path? I think you have a rather interesting life history that should be relevant for younger folks entering the space.

Robin Hanson: I am pretty opportunistic about my intellectual strategies. I changed fields several times. I started in engineering, then I switched to physics. After getting my master’s in physics, I went to philosophy of science. Then, I went into computer science for nine years, working with Lockheed and NASA doing Bayesian statistics and AI. Afterwards, I went back to school for social science at Caltech, then completed a two-year postdoc in health policy. Then, I got my job here as a now tenured professor of economics at George Mason University. 

I have worked on information, prediction markets, and a wide range of other theoretical topics, alongside these books I have written along the way. I try to look for a lot of tools as well as things that are neglected. I hold myself to the standard of staying in something long enough to make progress, staying long enough to make a real contribution in that field. I will defend that standard with each topic I have focused on. I have not seen many of you in the audience for some time. Welcome and welcome back. It is nice to see you all.

Allison Duettmann: Wonderful. Perhaps we can dig a bit deeper. Maybe you can give a bit of an overview of how you entered one of the areas you are focused on right now. If you could give a quick bird’s-eye view of one, that would be interesting. 

Robin Hanson: Foresight is related to technology. You are tracking it, seeing that it is improving, and wondering where it can improve the most. Technology in general is just a useful arrangement of things. The concept also applies to social arrangements, which was a key idea that engaged me long ago. You can create innovation in social arrangements just as you can in software. It seems like there is enormous potential there, and to some extent it would be easy to realize. That is what tempted me to become a social scientist in the first place. Something I did not notice early on, which I find important, is that the reason it is easy to find big wins is that they sit there for decades not being taken. Whereas in computer science, for example, people see a win and take it; then the next person has to beat that to get a win. The lesson is that for social innovation, it is relatively easy to come up with things that are better. The hard part is getting people to care, such as getting someone to try your idea out. 

However, academics will write about the ideals and explore an idea, but then they drop it. If you actually want to have an effect on the world through social innovation, you have to find someone who will take on the messy details. There must be an environment where people are willing to try things, learn, repeat, and eventually succeed. That is the general recipe for innovation. A lot of people like to sit and talk about great ideas for how the world could be improved. However, they lose interest when it comes to a concrete trial of something, because that is boring and messy. You would rather give talks on it, join a thinking group about it, get a major in it, write a paper on it, etc. Those are the things that engage people. The thing that is missing is doing some trials. 

Allison Duettmann: Yes, I think I remember you once mentioning the example that the prediction market would get picked up, but that would mean accountability for a few people. I feel as though part of what motivated you to write “The Elephant in the Brain” was this realization of what we say versus what we actually want.

Robin Hanson: Well, certainly seeing the contrast between the kinds of innovations people say they are interested in versus the ones they are actually interested in putting energy into trying is a data point about how the world works. It pushed me to think of hidden motives. We are not self-aware of our motives, both individually and with respect to social institutions.

Allison Duettmann: Cool, we will leave it at that. Now, to make it harder: What are any exciting cultural shifts that may make people change their mind? Maybe, if you dare, you can answer it in the realm of AI, or even social sciences, or economics. 

Robin Hanson: Well, look, the world of ideas follows a lot of fashion. If something has been popular for a while, the world is itching to change its mind and make something else popular. There is a new generation looking to make a name for themselves with something else, so there is this demand in the intellectual world. People are always trying to guess what the next thing may be. I think you are asking me for a change that I can endorse and praise. 

Almost always, when fashion changes, I go, “Oh, that was the wrong way.” However, I would say that perhaps half of the time the change is in a direction I can praise. The things that stand out most to me are things I started early on that deserved more attention, and then people gave them more attention. Prediction markets are an example of that. The world was initially pretty skeptical, and then people became more interested. I thought that was great; however, most of it is still academic talk versus organizational application. Another area is the Great Filter. The work seemed to be pretty important, and I could not understand why more people were not interested. I took some time off during my postdoc to write about it. Eventually, that phrase took off and caught on.  

Allison Duettmann: Could you explain what it is for people that do not know?

Robin Hanson: Right. The Fermi Paradox is this question of “Where is everybody?” You look around you and there is all of this life, but you look upwards and it just seems so dead. The Great Filter is a way of rearranging that question. There is this process that starts from simple dead matter, which turns into simple life, then complicated life, then eventually reaches what we are. It could happen anywhere, and it did happen here. However, it clearly did not happen in a lot of other places. So, the overall rate of transformation from simple dead matter to ourselves is very low. Which is to say, there is a Great Filter: something has to get past this filter to reach this point. 

Our grabby aliens analysis was finally able to put numbers to that. We were able to say that our best estimate is that life becomes grabby at a rate of roughly once per million galaxies.

Allison Duettmann: What does grabby mean?

Robin Hanson: The key idea is that alien civilizations appear in many places. You have the typical idea of how a civilization would appear; however, there is another kind that is really hard to miss. Why? Because they keep expanding, and from a long distance it will be obvious that something has changed. That is the sort of thing we can say we do not see. We cannot say with much confidence how many quiet ones are out there. Yet, that is enough to figure out their rate in space-time. We have a simple model of grabby aliens appearing in space-time; it has three parameters, and each can be fit to data. It gives us these answers: they appear once per million galaxies, and if we were to meet them, it would be in about a billion years. That is pretty out of reach now, but it gives more concrete answers to the Great Filter concept that something is in the way.
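As a toy illustration only (not the published model or its fit), the power-law appearance assumption Robin mentions can be sampled directly: if civilizations appear at a rate proportional to t^(n-1), the fraction that have appeared by time t out of a horizon T is (t/T)^n, so origin dates can be drawn by inverse-CDF sampling. The parameter values below are made-up stand-ins for the sketch:

```python
import random

# Toy sketch of power-law origin dates (illustrative, not the paper's fit).
# If the appearance rate goes as t**(n-1), the CDF of origin times up to a
# horizon T is (t/T)**n, so inverse-CDF sampling gives t = T * u**(1/n).

def sample_origin_dates(n_steps, horizon_gyr, k, seed=0):
    """Draw k origin dates (in Gyr) under the hard-steps power-law model."""
    rng = random.Random(seed)
    return [horizon_gyr * rng.random() ** (1.0 / n_steps) for _ in range(k)]

# With n_steps = 6, most origin dates cluster near the horizon: the mean of
# T * u**(1/n) is T * n / (n + 1), i.e. 12 Gyr for T = 14.
dates = sample_origin_dates(n_steps=6, horizon_gyr=14.0, k=1000)
print(sum(dates) / len(dates))  # roughly 12
```

One consequence visible even in this toy version is that raising the power n pushes typical origin dates later, which is part of why early arrivals like us look rare under this model.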

Allison Duettmann: It is nice that you tied a concept from earlier in your career to one that you utilized last year in your book.

Robin Hanson: Right. I had read a paper noticing that there was this power law that had been neglected. It said that the power law would apply, but it left it at that. I said it has big effects and is being neglected. I went back to work out the implications of that law, and I think there is still further work to do. A general intellectual strategy is to find neglected things.

Allison Duettmann: What are other areas where people are not following up on the power law?

Robin Hanson: Panspermia. This power law is about how the chance of advanced life appearing depends on how long a planet has been around. The key idea is that there is a set of hard steps to go through. The chance of life appearing is a power law of time, where the power is the number of those hard steps. That is just a straightforward implication of the standard hard-steps model. In our model of grabby aliens, it means that alien civilizations appear at a power law of time in the universe.

But if you ask about the panspermia hypothesis, it starts with the idea that there are two possibilities for how life on Earth started. One is that life appeared on Earth and took 4 billion years to get to where we are. The other is that there was another planet before Earth. Life could have developed there for billions of years and then transferred here to continue on. There are two relevant factors in comparing these. One is how unlikely it is that life came from another planet and landed on this one. That is a hard thing. The other is that life would have had a lot more time to evolve. With this power law, we see that extra time adds a lot of value. Sorry for the long explanation.
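The time advantage Robin describes falls straight out of the power law. A minimal sketch (the step count and timescale below are made-up stand-ins, not estimates from the paper):

```python
# Hedged sketch of the hard-steps power law: for times much shorter than the
# expected duration of each hard step, the chance that all n steps are done
# by time t scales as t**n.

def hard_steps_chance(t, n, step_timescale):
    """Approximate P(all n hard steps done by time t), valid for t << step_timescale."""
    return (t / step_timescale) ** n

# Illustration: with n = 6 hard steps, giving life twice the time (a prior
# planet plus Earth) multiplies the chance of success by 2**6 = 64, which is
# what makes panspermia's extra time so valuable in this model.
n = 6
base = hard_steps_chance(4.0, n, 1000.0)     # ~4 Gyr on Earth alone
doubled = hard_steps_chance(8.0, n, 1000.0)  # ~8 Gyr across two planets
print(doubled / base)  # 64.0, up to float rounding
```

So the panspermia comparison is a tradeoff: the transfer between planets is very unlikely, but the t^n factor can repay a lot of that unlikelihood.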

Allison Duettmann: Not at all. I think you wrote something like it is not like we have the infinite freedom of the universe, but more so a boxed freedom. 

Robin Hanson: A million galaxies for the next billion years. At least we have that. It’s like, sorry you thought it was a billion galaxies by the next trillion years, but it is only one million. 

Allison Duettmann: And then, once we meet them, can we say anything about what the game theory of cooperating with them would look like? Do you have any thoughts on this? If we did not just want a limited truce, but rather cooperation and survival, how can we make sure of that?

Robin Hanson: Most likely, technology would have stagnated by then, basically run up against fundamental limits. So, you and they would be at roughly similar technological levels. There would be a key parameter of the relative power of offense and defense. If defense was easier than offense, then you would expect a truce and a stalemate, because it is much easier to defend. If it turned out offense was powerful relative to defense, then you would expect conflict until you make peace, so that would become the focus.

You will not see them until they are nearly here, which will be at least a billion years from now. Nonetheless, you will certainly see the warning signs a long way off. People will be really anxious ahead of this meeting for whatever data they can get to judge these factors and look for a path to peace. You would really want to know a lot about them. There would be two key sources. Firstly, we would be able to look through our telescopes and see them in detail. They may expect that, but they cannot do much about it from where they are. We may see wars still going on and realize they are war-like, if we still see those wars going on after a billion years. That would be one main piece of data.

Another main source of data would be the more numerous quiet civilizations. If grabby civilizations are one in a thousand, then you would have seen roughly one thousand quiet civilizations along the way. You could use all of that history to judge what they may be like. All of these technologies would be far superior to ours. Nevertheless, I think it is a mistake to dwell on the conflict. We will meet them in a billion years, but there will also be so many years of potential community. We should aspire to be respected and to find things that work best. I would rather people think, “I hope we survive, accomplish something, and earn someone’s respect.”

Allison Duettmann: Yes, you also wrote a lot about value drift. Do you think there is anything we could gauge about what they would find interesting in us?

Robin Hanson: I think most people interested in the future are overwhelmed by how our values may change over time. I am optimistic about our technologies, but values are the part where most people find grounds for pessimism. I can talk about that later on.

Allison Duettmann: Okay, I will just ask one more thing before handing it over to Beatrice. Is there a way that you can tie every field you have been a part of together? If you have any cohesive narrative, please go for it.

Robin Hanson: The only way to find out who you are is to meet a foreigner and see how you are different from them. When you are in the ocean you swim in, you feel natural there; you are not different. When you are around people who are really different from you, that is what shows you who you are. I recently started a podcast with a philosopher named Agnes Callard. We called it Minds Almost Meeting because we really struggle to meet each other’s point of view, given our different attitudes and backgrounds. It is hard, but it has given me a lot of insight into who I am. 

What makes my background different has shown through in the light of that high contrast. The biggest difference I see there is that I have learned all of these systems of thought and she has not. She has learned logic and general reasoning, but I have learned thermodynamics, algorithms, economics, and statistics. If you look at my book “The Age of Em,” I am taking information from all of these systems to paint a picture of basic predictions drawn from every system I know of. If you learn a lot of them, then you can draw from a lot of them to do a lot of things. That is also tied into looking through all of these fields and finding what has been neglected. 

Allison Duettmann: Do you think there is one area that you have gone into that feels undervalued to you?

Robin Hanson: I feel as though all of the areas I have gone into are things I feel that way about. A lot of them I have not given as much time as I would like. I can list areas I have dabbled in but not pursued as much.

Allison Duettmann: Yes, please do.

Robin Hanson: I feel as though explanations for the large arcs of history are important. Also the rationality of disagreement: why do we feel the need to disagree with others when they are as smart as we are? I did a lot of work on that at one point. There are a lot of institution designs, but not so many concrete trials. They look promising, and they are waiting to be tried. 

Allison Duettmann: Do you think that crypto already took on your prediction market pioneering and is actually trying to do that?

Robin Hanson: I do not necessarily think they are doing the thing that needs to be done, but I am happy that they are there.

Allison Duettmann: What is the bit that is missing?

Robin Hanson: For example, in crypto many projects have created platforms where people can bet. You can deposit money, make a topic, and bet on that topic. There are about a dozen of those projects out there. The problem there is that the real customer is people who have decisions to make and need advice. These markets are instead going for the customer who wants to speak their mind, and that is what pays for the whole system. That is a valuable thing, but it is not how most organizations work. Most organizations work because they are responsible for decisions, they have resources to ask for advice, they do analysis to support decisions, and then they make valuable decisions. 

Most organizations do not rely on a crowd of people speaking their minds about any topic they are interested in. What we need are people who can develop particular applications that would support decision makers. Then, they need to go through the trouble of working out the details and building a track record to show it works, in order to convince decision makers that this is a good source of help for their decisions. If you can do that, there are trillions of dollars of value here in making better decisions. Very few people want to be the ones who try something out before it has been tested. Those testers are what we are waiting for. 

Allison Duettmann: At that point prediction markets may turn into decision markets. Okay, so Beatrice will take over to see if you are interested in making predictions of what a positive future could look like. This is wonderful so far!

Beatrice Erkers: Thank you. A big part of this podcast is trying to show that there are things to get excited about for the future, as well as trying to envision the future in a positive way. You seem to have done a lot of thinking about what the future will look like, so I am excited for your answers to these slightly more philosophical questions. I will start off by asking: would you describe yourself as optimistic about the future?

Robin Hanson: I would say that I am. However, that is more of a value description than an outcome description. As I discussed with Allison, there are two key questions when thinking about the future. Firstly, will we continue with the same technological growth that we have seen in the past? How long will it last and how far will it go? A great many people, such as my colleagues, have this intuition that the world could not possibly change in the future as much as it did in the past. They see this enormous change in the past and cannot see how that could happen again. As such, they implicitly think we are at the end of history in terms of economic and technological change. That is how a great many ordinary people see the future, but I think that is just wrong. If you truly understand the details of what is possible, you should expect as much change as we have seen in the past. So that is optimism in my eyes when thinking about the future: really large change and high rates of change. Once you convince people that is likely, they do not necessarily see it as optimistic. People think more about whether we will like this new, strange world. 

In that case, many people see optimism as “What do you think of that strange world?” Now, I have seen how different the past was. I think most people are fooled by history and historical fiction. The past was really much more different than we see in these historical fictions. Their attitudes and habits were different. If we just realize how different the past was, we can fathom the same rate of change being possible in the future. Not only were their habits different, but their values and priorities were different as well. They loved some things you hate and hated some things you love. In fact, my book “The Age of Em” touches upon that as well. The future is as strange as you should expect it to be, given the variation from the past until now. Ultimately, optimism comes down to the question of whether that degree of change is okay with us. Part of that is inherently related to whether it is okay that the past was as different from now as it was. 

As such, I will say I am an optimist in the sense that the possibility of our future world being as strange as it may be is okay with me. Others are more stuck on how everything is now and just want it to be better within the realm of what we know. Over the last few centuries we have seen many trends, and science fiction keeps projecting these forward. We have seen this with the Culture novels or Star Trek, and a lot of people are very attached to the idea that the future will continue these trends. For instance, increasing wealth, increasing leisure, less religion, more promiscuity, more travel, more art, less war, and so forth. These are some of the major trends of the last few centuries, so many people agree when it comes to projecting these trends forward. My book “The Age of Em” says, in effect: not so fast, you cannot reliably project the norms of now over the long run. Strangeness and change should be expected. A long answer, sorry. 

Beatrice Erkers: It was a long answer, but very very interesting. You seem to be fine with the strangeness coming, so I would interpret that as being optimistic. In terms of what makes you excited for the future, is there anything particular? Also, since this is the Existential Hope Podcast, can you share a vision of existential hope for the future?

Robin Hanson: I honestly think I am just excited by size. There is a trope in science fiction called the “Big Dumb Object.” Many science fiction stories and movies have a Big Dumb Object from which much of the emotional energy comes. Think of Ringworld, or a Star Trek movie flying around a huge spaceship. I like science fiction, I have to admit. I just like the idea that the future world will be really big, with all of these little parts doing different things. In each part, there are people living their lives, with interesting stories adding to this overall huge thing. That idea alone just excites me. 

I had this strange dream once, which I described in my preface of “The Age of Em.” A long time ago, I was dreaming of this future world in some vast city. All I could see in this vast city was one person, where they lived, and the things they had. I could just sense how huge it all was, and that gave me hope. The fact that it would be a world still full of little people and little lives. I cannot necessarily tell you why that is good to me. It is just an axiom I guess. 

Beatrice Erkers: That is still a very good answer. You are obviously very excited, and I think a lot of people can connect to it. I feel excited about it. I wanted to jump into the next question. It is hard for people to envision positive scenarios of the future, which I guess touches upon your optimistic answer. Would you be able to speak on that again? Why do you think it is easier for us to envision dystopias than it is to envision utopias?

Robin Hanson: It seems to me that in discussions about things, emotional energy just gets attracted to negative scenarios and possibilities. If you look at political discussions, there is very little discussion of exciting possibilities. It is mostly discussion about what could go wrong and all of those overwhelming topics. Also, much of science fiction swings that way. In the earlier days at the Foresight Institute, nanotechnology was one of the main focuses. We saw this interesting phenomenon where Drexler explained the great hopeful changes that nanotechnology could make possible, as well as some things that could go wrong. Eventually, the public discussions were full of all the things that could go wrong. So much so that when the government decided to have a project associated with that name, it distanced itself from all of the positives, because the name had become associated with all of these negatives. 

There is a trope that most science fiction is an allegory about the present, and I think that is true of a lot of futurism. It is an indirect way to raise issues about the present through the alternative world of the future. It is not really about the future. People do not really care about the future. They care more about their world and indirect allegories. That is actually a pretty good description of science fiction and futurism. For instance, the thing that grabs people most is global warming. This happens on a scale of two centuries, when many other really big things are likely to happen as well. So, why all of this focus on global warming? You might say that it has a morality that I can get behind. Yet, that is really about today. If you start to notice this pattern and think about the future on its own terms, you will think of a lot of these things differently.

Beatrice Erkers: Yes. You touched on sci-fi a few times. Is there anything in particular you would recommend to someone looking to get into futurism, or any of your fields? 

Robin Hanson: It is really hard. When you are young and interested in big, grand things, science fiction is very exciting. It describes all of these possibilities for the future. The more you learn about the world, the more you realize science fiction authors have not been accurate with respect to many realistic issues. The readers do not know any better, so why should the authors bother? So, the more you learn, the less science fiction makes sense even as a weak allegory for the future to come. It ends up having almost nothing to do with the future, which is sad, because it was still fun for a while. So, I am really reluctant to point to anything concrete that I feel does a decent job, because hardly anything does. I would recommend looking at more serious analyses and just looking to science fiction for entertainment. 

Nevertheless, one main function science fiction does provide is stretching scenarios or concepts in some way. It defies expectations or forces you to make decisions about something you do not want to make. I think science fiction does serve a nice role for those thought experiments. They are showing you that there could be examples that defy your current categories. They help you generalize your approach so that you can be more robust in some opinions moving forward. When you see all of these things could happen, it helps widen the perspective a bit. So, science fiction does not need to forecast the future accurately at all. It serves the function of showing you that there are possibilities that your future analysis has yet to account for. 

Beatrice Erkers: I can take the opportunity to recommend your book “The Age of Em” as an interesting read in relation to this.

Robin Hanson: It’s not science fiction. It is like science fiction, except there is no plot, no characters, and it all makes sense. That last part is the hard part for most science fiction: having it all pull together and make sense. 

Beatrice Erkers: Yes. It is hard to make sense in general. I will hand you back over to Allison for the last questions. 

Allison Duettmann: Thank you, Beatrice, I loved hearing this. I think hopeful visions from Robin are always interesting. At the end of this meeting, what we usually do is ask a few questions around a more concrete story prompt. There is a concept, which I am sure you are probably familiar with: Toby Ord and Owen Cotton-Barratt introduced the concept of a eucatastrophe. It is an event after which the expected value of the universe, whatever that may mean, is much higher than before. Do you have a better term for this?

Robin Hanson: Good stuff. You know, I guess I would have to think of a better term for that. Something straightforward such as innovations, advances, or breakthroughs. Breakthroughs might be a good word, such as breaking through a barrier and having greater things on the other side.

Allison Duettmann: Okay, I love it. I also like “good stuff.” Well, we will definitely take it into account. The second question is if you think of such an event, could you describe one? Could you come up with a concrete scenario that would look like a eucatastrophe for you? I think Christine mentioned in our meeting the idea of reviving a frozen dog. 

Robin Hanson: I can tell of lots of positive events. Certainly, substantial progress towards reviving cryonics patients would be one. Obviously, a much larger increase in cryonics customers would be another. I still find it quite sad that most of the world dies when they would not have to, if only there were enough customers for cryonics. In a short time, that increase in customers would be really great. Also, the first working brain emulation would usher in this new age of Em, which I wrote of in my book. That would be good overall. Also, effective interstellar colony ships heading out to expand our civilization would be good as well. I also have these ideas for social reforms, with any sort of substantial experiment with those on a small scale. These would lead to larger versions of them and positive change.

Allison Duettmann: Give us a few ideas!  

Robin Hanson: Well, decision markets. For instance, there is a fire-the-CEO proposal where you basically create markets in each company's stock price, conditional on whether the CEO stays or leaves. You have those markets running for the Fortune 500 for five years. You collect data showing that the companies that followed the advice did better than the others. You then get boards of directors to follow the advice, and you change corporate accountability in a few years for a few million dollars. If you legitimize it through that decision, it would become legitimate for a lot of smaller decisions. That would sort of break open good change. See, I like “breakthrough.” I like that rebranding of eucatastrophe: breaking open the possibilities there.
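As an editorial aside, the conditional-market logic Robin describes can be sketched in a few lines of Python. This is a minimal illustration only: the company names, prices, and 5% decision margin are hypothetical, not drawn from the conversation or from any real market design.

```python
# Sketch of the "fire-the-CEO" decision-market idea: two conditional
# markets per company estimate the stock price if the CEO stays vs.
# leaves (trades in the branch that does not occur would be refunded).
# The board's "advice" compares the two market-implied prices.

def decision_market_advice(price_if_kept: float,
                           price_if_fired: float,
                           margin: float = 0.05) -> str:
    """Advise firing only if the price conditional on firing exceeds
    the price conditional on keeping by the given margin."""
    if price_if_fired > price_if_kept * (1 + margin):
        return "fire"
    return "keep"

# Hypothetical market-implied conditional prices for three companies.
markets = {
    "AcmeCo":   {"kept": 42.0, "fired": 47.0},
    "BetaInc":  {"kept": 30.0, "fired": 29.0},
    "GammaLLC": {"kept": 55.0, "fired": 56.0},
}

for company, p in markets.items():
    print(company, "->", decision_market_advice(p["kept"], p["fired"]))
```

The margin parameter is one simple way to avoid churn from noisy prices; a real design would also need rules for subsidizing the markets and voiding trades in the unrealized branch.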

Allison Duettmann: You also like “breakout.” I remember you mentioned it during a podcast. You wrote about a world government that could lock us in on Earth, and that the aliens who expand may be the few who escaped such a lock-in. That raises the possibility that we ourselves may or may not need an escape route. Do you want to say a bit more about that? What worries you so much about it? Does it tap into another version of a eucatastrophe that you could imagine?

Robin Hanson: Right. I think you are right that breakout is a subset of breakthrough. It interestingly connotes the idea that one breakthrough is to break out of the controls you may face. As you may have heard before, probably the biggest decision humanity will ever face is whether to allow interstellar colonists or not. You may think it is an obvious decision, but my book elaborates on how it is not so obvious. A lot of people will have a deep emotional attachment to not allowing that. Over the next few centuries we will continue having strong global governance and communities, which create more accord and convergence around the world. That legitimately solves important problems people care about, such as war, global warming, and regulating innovation.

All of these will be solved in part by having either a world government or a world mob that effectively gets people to do what they say. People will like that. Then they will realize that if they allow an interstellar colony ship to go off without strong political officers on board to make sure it follows central commands, that would be the end of this era. We would then return to an era of competition and conflict that would be out of control, meaning this civilization would no longer have a central governance that could impose its will on the periphery. I think part of it is really how evolution and competition would change our descendants. That is one of the biggest fears people have: that evolution would select for new values, styles, and attitudes, making our descendants alien to us. So, I think a new world government would limit changes like genetic engineering.

Allison Duettmann: Yeah, I think a world government could amount to collective suicide. It would be a curious answer to the Great Filter: being so worried about destroying ourselves that we cause it by accident. I think that is a pretty likely possibility.

Robin Hanson: Here is a metaphor I like for this change issue: the standard hypothetical transporter, the Star Trek transporter. Many philosophy classes have a long, standard discussion that asks: if you take your atoms apart and then put them back in a different arrangement, is that still really you? After a while, the class will be really split. However, if you ask the same question another way, they agree: you just walked out of the transporter; is that new version really you? They will say yes, that is me. It is a good metaphor for how we look towards the future. We are just not so sure we want to endorse all of this strange stuff, yet we feel strong kinship when looking back to our ancestors. I think that is how it will work in the future as well. However weird these future people will seem to us, they will still feel their essence has not changed when they look back at us.

Allison Duettmann: That is a lovely metaphor. We are pretty much at the end of this. What is the best advice you have ever received?

Robin Hanson: I am going to disappoint you this time, because I do not have a good answer. Nevertheless, I remember when I was going up for tenure, I was doing all of these interesting things with the Great Filter. My colleagues said: if you want to get tenure, you need to focus on economics for a while and only then do all of this other stuff. I listened to them, got tenure, and spent the years after doing all of this other stuff. So, that advice was really effective. I would build on that to say: whatever problems you are really interested in, do something in the next ten years to set up your life so that you can focus on them later on. It is a good investment.

Allison Duettmann: That is a great segue into the final question. As you mention that idea of waiting ten years, that would mean that your AI timelines are more stretched out than that. Would you say a few words about your AI timelines and how they affect our chances for a positive future?

Robin Hanson: I did a recent statistical analysis of all the jobs in the U.S. from 1999 to 2019. I looked at how automated each job was in each year, and at the determinants of automation, which turned out to be very classical. Then I looked at whether any determinants of automation had changed over time. The answer was no; they were the same determinants as twenty years before. Was there any correlation with changes in wages or number of workers? Also no. That says to me that over the last twenty years, there was basically no change in the nature of automation. There was only modest change in the quantity, and that has been going on for many decades. Roughly every thirty years, we see a burst of anxiety around automation and AI, wondering if this time will be different.

It has yet to happen, and I project that forward for another few thirty-year cycles before it does. I do not see any indications that it will be different in the near future, although it will be eventually. However, a lot of people are impressed by inside views of recent demos and what all of these new things can do. I used to feel that way as a physics student, reading so many cool things about AI in 1984. Nevertheless, it did not happen then. I think consistently over the decades, people are just over-impressed by new demos of new machines and capabilities. Those demos do not necessarily mean we are on the cusp of this change.

Allison Duettmann: Yet, you still think we will get there eventually, even if your AI timelines are not short. 

Robin Hanson: Of course, but I feel Ems are more likely to appear before full human-level AI. I think we will get plenty of warning, because it takes time to make substantial increases in technological capacities. There are big fluctuations, but they are rare. We see steady trends in abilities and progress over longer periods. I think emulations are more likely to be a disruptive transition than classical AI. Still, in both cases I think we will get lots of warning.

Allison Duettmann: I think that is a wonderful way to end it. Thank you so much for coming. I know we stumbled across a variety of different fields; there is no other way to do it with you, otherwise I would fear leaving an important topic out. Thank you everyone for joining. I hope to see you all very soon!