This episode is a special episode where we interview Anthony Aguirre & Anna Yelizarova from the Future of Life Institute (FLI) about their Worldbuilding challenge.
FLI welcomed entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence. In this interview we talk about the concept of worldbuilding, what we need to build better worlds, and why it is important to encourage the imagining of more positive visions of the future.
Anthony Aguirre is a theoretical cosmologist. Aguirre is a professor and holds the Faggin Presidential Chair for the Physics of Information at the University of California, Santa Cruz. He is the co-founder of Metaculus and Future of Life Institute.
Anna Yelizarova manages multiple projects within FLI and helps with operations and growth. She completed a Bachelor's in Computer Science at Stanford University and a Master's in Communication. She focused her graduate research on the study of people's behavior in virtual simulations at the Virtual Human Interaction Lab (VHIL), where she helped program and 3D model the virtual worlds for the studies. She currently spearheads FLI's Worldbuilding contest.
The question-and-answer series below showcases an episode of the Foresight Existential Podcast hosted by Allison Duettmann and Beatrice Erkers. Special guests Anna Yelizarova and Anthony Aguirre, from the Future of Life Institute, join to discuss their work and the journeys that led them to where they are now. Topics include the risks and benefits of AI advancement. Anna and Anthony also share their own viewpoints on how they think about the future and what motivates them during these trying times. Finally, they elaborate on the details of the worldbuilding contest hosted by FLI, and what it means for our hopes for the collective future.
Allison Duettmann: Hello everyone and welcome to the Foresight Existential Podcast. I am glad to have you all here. I am joined today by Beatrice Erkers, alongside Anna Yelizarova and Anthony Aguirre from the Future of Life Institute. As many of you know, the Existential Podcast is a bit of a different podcast in the sense that we talk with the thinkers and doers we feel most inspired by. Not only do we look at the technical details of their work, but we also look at what excites them about the long-term future, what inspires them, and what motivates them. I believe the Future of Life Institute is really one of the greatest organizations to have on board here. Your organization is a fantastic go-to resource for introductory material on various risks, from lethal autonomous weapons to viruses and AI. Much of the Existential Hope website, which will be live once this episode comes out, links to resources of yours.
While you take all of these risks seriously, you do not shy away from truly addressing them. You gather people around long-term questions and help to create a beneficial mindset. You also have a really fantastic podcast, with folks like George Church, Sam Harris, Sean Carroll, and so forth. To all you listeners out there, this is a really great one to subscribe to. Additionally, you are behind the Unsung Hero Award, which is handed out to folks who have been truly instrumental in improving the world without getting much recognition at the time for it. Last, but not least, the Future of Life Institute is led by tremendous individuals, including Anthony Aguirre who is with us today. He is a physicist and co-founder of Metaculus. We are also joined by Anna Yelizarova, who is spearheading FLI's worldbuilding contest, which we will talk about in a few moments. I would like to officially welcome the both of you. Feel free to introduce yourselves, one by one, and share what brought you to the Future of Life Institute.
Anthony Aguirre: I am Anthony. I have been a physicist most of my career, as well as a cosmologist. Part of my job is to think on very large distance scales and long timescales. What really started the Future of Life Institute was a group of us, including Max Tegmark and a few others, thinking about those long timescales in the big picture, and what happens when we apply that to humanity and our life on Earth. The universe that we know of has been around for 13.8 billion years, and it still has trillions of years ahead of it. Things are eventually going to be very different than they are now. We tend to look at the world only over the last few thousand years, but it is going to be around for much longer. What does that mean? How can we think about those questions? The Future of Life Institute was born of that big picture. We are sort of at an inflection point, given that major variables like intelligence, technology, and computation have been advancing. The world is going to be very different, even in the near future, so we wanted to jump in. That was sort of our background and what led us to these long-term issues. We started back in 2015 and 2016 with a like-minded group of people, and it has just been fun and growing from there.
Anna Yelizarova: I will chime in with a few words about my own background as well. My name is Anna and I have been with FLI for almost two years now. My background is in computer science and communications. I spent my undergraduate career studying technology and how to build it, and most of my Master's thinking about why we are building it in the first place. What are the implications for society and people in general? Then, once I was out of school, those questions had a big influence on what I wanted to do, and that led me to the AI safety world. Also, thank you Allison for wonderful opportunities like these. I hope more people will see these sorts of discussions and that they will pique their interest in bettering our future as well.
Allison Duettmann: Wonderful, thank you. I would love to hear you both add a bit more color. What is FLI really working on in terms of its long-term mission? What are a few crucial projects you are really excited about right now?
Anthony Aguirre: In terms of mission, I think we have had to consider both the risks and the rewards that this big technological change is going to bring. We see that things are going to change dramatically. There is a concern that if you take any system and change it drastically, things could get worse; the way that doesn't happen is when people work really hard to make it better. I think we have a conundrum there, given that the default trajectory of a lot of current world dynamics could easily make things worse. We really have to take care to make sure that doesn't happen, and steer things in the right direction, but both directions are there. There is also a potentially huge upside in how we see the future.
The exciting part is what kinds of institutions we can build with all of these technologies to better our world. A lot of our focus has been on artificial intelligence as the next big, transformative thing: both the risk side of catastrophic or existential concern, and the question of what institutions we need to make it a force for good. As such, there are a number of projects in AI technical safety research and funding, and in AI policy, trying to work out what governance structures should look like. I was actually going through a list yesterday that had 45 significant projects on it, thinking, "Oh God, that is a lot going on." Nevertheless, I would say one of the things I am most excited about is what society will look like if these initiatives work, two or three decades from now. We see a lot of institutions that are failing us, or muddled in bureaucracy, and they are not working the way we need them to. A lot of reform needs to take place. What are the replacements that are up and coming? How do we create ones that aren't just capturing things in new ways? For instance, social media was exciting, and now it is causing a lot of problems from a variety of angles. How do we build institutions that will hold the right dynamics and lead society in the best direction? There is also our positive futures work, alongside the worldbuilding contest, which I hope we will elaborate on a bit more today. These are some of the things we are most cautious about, and most excited about, right now!
Allison Duettmann: I will have to go through that list you mentioned again. I know whenever I go through your projects, I see them all as really important; I imagine it can be difficult to prioritize amongst them. I think you gave a wonderful segue into the worldbuilding contest, which is spearheaded by our next speaker, Anna Yelizarova. If people want to take a minute to check out the website, it is truly beautifully done. I feel as though we sometimes lack the ability to even conceive of the positive futures we are aiming to build. You are sort of a shepherd here, setting the tone to construct something that has an aesthetic and is really inspiring to participate in. Perhaps tell us a bit more about what this contest is and how people can get involved.
Anna Yelizarova: The contest launched in January and it is open until mid-April, so we are approaching the last month, but there is still enough time to apply. We set constraints for you to think about a world, and we ask for a set of elements to submit, such as short stories, a media piece, a timeline of events, and so forth. We are also asking for answers to a series of short prompts about the world. The interesting constraints we chose include the world being set in 2045, and the constraint that the world is good overall: not a utopia, but not a complete dystopia either. These caveats are there to help people brainstorm solutions to problems we are experiencing even today. It is a really fun initiative combining analytical and creative skills, and it is for anyone. We have plenty of platforms where you can meet others to partner up with, and so forth. It is supposed to be collaborative and fun, calling to writers, scientists, policy researchers, AI researchers, and anyone else. Essentially, it is trying to get our creative juices flowing to imagine hopeful futures and work through the details of how exactly to get there. That is a bit more about the contest! All the details are at worldbuild.ai.
Allison Duettmann: Wonderful. Could you fill us in a bit more on what exactly worldbuilding is? Why do we need it? Why were you excited about launching such a contest?
Anthony Aguirre: Yes, so you said earlier that we often have trouble imagining what a positive future would look like. A reason that is hard is that it is difficult to imagine a world very different from our own. A lot of the worlds we do imagine have been built by the movie industry and others, and tend to go in a more dystopian direction. A lot of effort has gone into developing visions of the future for that commercial purpose, leading to a certain set of pictures. However, we can use the same apparatus to construct the whole world in which a fictional narrative takes place, but in a more positive sense. Take The Lord of the Rings: J.R.R. Tolkien famously invented whole languages, drew maps of societies, and created a whole world simply to set his novels in. That is now recognized as a crucial craft in itself, called worldbuilding. How do economics work? How does science or magic work? What are the rules of the world? What are the artifacts that exist? Without those details, it is not possible to properly imagine that world. So the purpose here was to use worldbuilding as a tool for encouraging people to think through those details of what the future could look like and build concrete scenarios. Going beyond that, the creative and experiential component allows people to feel themselves in those futures, letting it sink in on a more visceral level. What could positive futures look like, rather than just vague ideas of dystopian ones? Don't get me wrong, it is important for people to understand disaster scenarios so we can avoid them. Nevertheless, we do not want to think only about that end of the spectrum.
Allison Duettmann: I remember enjoying the worldbuilding exercises when I was there. They still stick with me. I think having people go through the motions inspires them to think through what things could truly be like, seeing the future through a new perspective. I am eager to read, listen to, and watch all the proposals that will be submitted. I am quite excited for this contest. How does the concept of worldbuilding relate to existential hope?
Anna Yelizarova: We intentionally set our world in a hopeful future, and that was helpful for a number of reasons. For one, it is much harder to imagine every single thing that could go right, since much of what we see in Hollywood and in our media shows what goes wrong. In a way, for a positive future you have to explain how all the issues played out. How did we avoid AI problems and risks? How did we avoid global conflict and nuclear war? Much of what FLI and Foresight address should help someone envision a hopeful future. I think it is a beneficial exercise because it advances problem solving and hypothesizing. What do we need to get there? What kinds of institutions can we form? How do we organize ourselves to ensure that future?
The second part, which counterbalances that, is the storytelling inspiration. I think it does have an effect on us when most fiction is set in a dystopian future. As a society we have a very negative relationship with the future, as opposed to a positive one. Hopefully, the more productions created around these visions of a hopeful future, the more hope people will have, giving us something to work towards. When everything seems bleak, the mindset shifts to, "You know, why try, right?" However, these aspirational visions give people the drive to enter a career path, study certain topics, and work a bit harder to get there. We are hopefully inspiring people into action to execute these visions. The relationship is both to crowdsource ideas and to inspire people to work towards them.
Anthony Aguirre: I agree. It is almost too easy to build dystopian ones. I want to add one other thing. We do get lots of positive visions for the future, you know: new products, services, startups, and all these things they sell to us. So there is all this hopeful promise, and some of it is true, but what is missing is the interaction of all those things with one another and with the dynamics of society. We have to ensure that these positives come together to create a wonderful world, because we have created wonderful technologies thus far, but are we happier? It seems less clear than it should be. Over the last 20 years we have gained technology, AI, biotechnology, and so forth, but is the world better? The crucial component is not necessarily the technology that exists, but how we use it for the world and the people in it. So I believe what distinguishes worldbuilding is that you have to think about how it all fits together. What does it mean when you put these technologies into societal and environmental dynamics collectively? We have to work out the details, such as reducing inequality, strengthening economies, and increasing quality of life overall. It has to be more ambitious; otherwise it is too simple to create individual positives, as we have until now.
Allison Duettmann: You both have got me excited. How do I join? Also, what is beyond worldbuilding? Can you give people an understanding of what they can expect? What do you plan to do with the products and the fruits of this beautiful labor? What's next?
Anthony Aguirre: I can say a little about that. The contest will happen and there will be a lot of entries, which I am looking forward to. I think the opportunity for a lot of people to read these entries will be powerful in changing a lot of mindsets, which is one of my goals. The winners will get a lot of publicity, and we will try to bring people into contact with these works. There are also plans for a screenwriting contest for some of the worlds created, alongside future plans to develop that into a movie-making contest to transmit these ideas further. In addition to those plans for further media, we have intentions to commission future work from the winning teams. If you have these amazing people who have built a world, they can do it again. Rather than 2045 with these constraints, we can think of a whole other scenario. Obviously the world will not work out exactly like any of these visions, as the future is very hard to predict. As such, we want to think through different scenarios. One of my hopes is that if we see a handful of these hopeful worlds holding the same thing in common, we can try to focus on making that kind of thing happen. Whether it is universal basic assets or really good AI systems, if it keeps coming up in hopeful worlds, we had better have it for the best chance. We want to create different worlds and scenarios to experience and then learn from. We hope for takeaways in terms of what policies, governance, and social structures we should be trying to encourage now to make these possible realities more realistic.
Allison Duettmann: I also love that it is really a forcing function, getting a lot of different people from different disciplines to cooperate with one another. I think even collaborating with a fantastic storyteller, or someone focused on AI, creates much more inspiration to be building that sort of future. Oftentimes, the bits and pieces assembled from different places are truly fruitful and multidisciplinary. Okay, wonderful. You already mentioned that this one is set in 2045. I already love what you have mentioned so far, but could you explain a bit more about why you chose that year? Could there be different years, or even different topics, moving forward?
Anna Yelizarova: Yes, I will answer this in a second, but I wanted to mention one last thing: there is a lot of prize money to be won. There is up to $100,000 in total, with the winning team receiving $20,000. We take this seriously, so these fun thinking exercises can pay off. We are trying to encourage collaboration, so the prize purse grows with the size of your team, which is explained more on the website. As for 2045, that choice is interesting because it is a bit closer to us and feels reachable. If we pick something too far in the future, it almost feels like working with a blank canvas. However, 2045 still includes traces of the current world as a framework for thought. It really puts an emphasis on the problems on the horizon. We are not necessarily looking for submissions of worlds where everything is fine and dandy, but rather worlds that have friction and conflict, where those realistic obstacles are addressed by human ingenuity, collaboration, and effort.
Allison Duettmann: Wonderful, yes, I think that is important, as a really good story requires friction and struggle. Thanks, Anna. I do think 2045 is close enough. It is crazy to think back to 2000 and how fast we got to this point. Time really flies, so 2045 is upon us, even while it feels incredibly far away. We will tie a bow on the worldbuilding contest and FLI, handing it over to Beatrice to guide us into the existential hope part of our conversation.
Beatrice Erkers: Thank you Anna and Anthony for joining us today. Before I get started: FLI has been so meaningful to me. I found out about FLI through Max Tegmark and his books about the future. I had not been aware of this whole ecosystem of organizations working on, and thinking about, the future, so FLI was a funnel for me into this ecosystem. I was a fan of FLI before I knew Foresight existed, so it is really interesting to hear you talk about what FLI does and the worldbuilding contest. I will direct this question to both of you, and Anna, you can start. Can you share whether you have a vision of existential hope for the future?
Anna Yelizarova: Sure. I will start with something not too utopian and more grounded in the near future, something more realistic to strive towards. For me the technology is less important than the people. It is about seeing a more abundant world, with less scarcity and a better quality of life for people across the globe. AI is a huge part of making that happen: in a more productive society, we would all need to work less. However, even if you have advanced AI systems that make life easier, you could see the fruits of that labor concentrated in a few hands. My vision of existential hope is a future where we figure out our values once and for all as humanity, and where the incentive structures are reworked a bit, so that the main driving force isn't an unintentional market force. I would like to see us live more sustainable, healthy lifestyles. I want us to rethink our relationship with work to find fulfillment and purpose. I would love to see a future where all of us work a lot less, and I get excited thinking about what that looks like. What communities will we form? Where will people spend their time? Overall, a future where everyone's needs are met. Of course, we can work to enhance our lifestyles, but seeing a world where we are paid for going to school, for instance, would be nice. I have been out of university for six years and I always discover things I wish I had studied in school, but it is a huge effort to go back, get a loan, and so forth. Maybe if we could all work less, we could spend more time rethinking our relationship with the things that matter most. Ultimately, I think of a world where we aren't working just to survive and are instead rewarded for the things that matter most. It becomes a different playground of how we spend our time. There will be less friction, because there is less scarcity. I think that is fascinating.
Beatrice Erkers: I think that sounds great. Anthony, do you want to share?
Anthony Aguirre: Sure, I would endorse everything Anna said. Just raising everyone's quality of life to a better level would be gigantic. The overall feeling in my vision is that we manage to keep empowerment and agency with people and their decisions. Currently, many people feel like they are powerless, unable to change things, and stuck in a big system they cannot understand. I think that could get worse if we do not do something about it. I am excited about a future where there is a lot of capability built into the system, where people collaborate together in ways that respect how human social dynamics work, mediated with the help of technology. As humans, we are good at making decisions in smaller groups, say about 20 to 100 people, but we are not so great at making bigger group decisions. We haven't evolved to be able to do that yet, but we do have technology to help with governance and social institutions moving forward. That is one of the things I am most excited about. I think we need to reinvent these things with the technologies we have. Along with material abundance, there can be an abundance of social connection and agency among people. We must keep in mind that these institutions exist to serve us, and we can make them work for us.
Beatrice Erkers: Definitely. This touches upon my next question. Do you see potential risks, and how could we avoid them? You hinted at that with the idea of us still being in charge. If you see any potential risks, feel free to elaborate on what they are and what we need to do to get around them.
Anna Yelizarova: Yes, I would say we need better governance of these technologies. There are a bunch of what-ifs, such as the misuse of AI, or advanced AI systems leading to a dystopia, and so forth. I think better governance systems for handling potentially dangerous technologies are the single most important thing. For all technologies, there are going to be risks, and all of it will develop quickly, from AI to biotechnology. There could be, God forbid, more issues on the pandemic front, or even something more subtle, such as what we are seeing in the sensemaking ecosystem currently. That is a huge problem right now that was unintended. Nevertheless, I think the solution lies in better institutions and safeguards against groups building in these different silos. It also has to come from helping prevent race-dynamic scenarios. Also, so much of technology comes out of the military and out of weaponizing things; I would love to see us build things for other reasons. Luckily, the AI safety community is growing, and I am hopeful that, with a lot of people now thinking about the risks, we have the momentum to influence the people building these tools in a positive way.
Beatrice Erkers: Do you have anything to add Anthony?
Anthony Aguirre: Yes, she brought up an important point, which is: if you think back 10 years, this whole ecosystem of people trying to build better worlds didn't exist. There were a lot of people thinking about and trying to improve the world, but technology was advanced at the hands of those trying to weaponize it. Now, how do we systematically think about the future and all of its failure modes, trying to avoid them and build institutions in a global way? We now have existential risk conferences with a thousand people. That didn't exist 10 years ago, so it is an amazing progression. On the optimistic side, people are working hard to make things better.
I would say in terms of risks, there are the obvious catastrophic and existential risks we are worried about. I also worry about the slow-building risks we sleepwalk into because they come from vast systems we have designed: all the money-maximizing systems, the social media economy, the media ecosphere, and so forth. We built these big systems and no one is in control of them. Once they get powerful enough, it is hard to dislodge them. In that sense, it is good there is so much attention on the things going wrong in the media system and the extractive capitalist system. It is hopeful that we are not ignoring them and that more people are becoming conscious of these problems. The risk is not a small probability, it is a very big probability, but it is being recognized, and that is what matters.
Beatrice Erkers: Definitely. Those slower risks are harder to get people to pay attention to, but it is hopeful that this awareness is growing. Anthony, do you consider yourself positive about the future? What would you say has made you so?
Anthony Aguirre: I would say that one of the things we at FLI hold in common is that we are intellectually pessimistic and temperamentally optimistic. It is very clear what bad directions the world could go in, but we are not sitting around drinking heavily and bemoaning the state of the world. Everyone is thinking about how to productively and energetically change things to make them better. When you see these negatives and feel like there is nothing to do about them, it can be depressing. However, as soon as you feel some capability and agency in taking tangible steps forward, it has a real positive effect. You still see problems, but you are working hard to do something about them. If I had to lay out probabilities for how the world will turn out, realistically I would say a little more on the bad side than good; I am a Metaculus pessimist. Nevertheless, I think you can think the same way about an individual life. There are a lot of ways it can go bad, but you try really hard to do your best, going to college, getting a job, finding a partner, and so forth. I hope that as humanity we will do that too, as opposed to becoming depressed. My outlook is mixed, I would say, but that is just me personally.
Beatrice Erkers: Anna, what about you?
Anna Yelizarova: I like to think I am a generally optimistic person. Yes, there are risks that keep me up at night, and even though there is a narrow path towards a positive future, I still always hope for the best. We can always complain, but I think our lives are much better than in the stories I heard growing up. Things could go wrong very quickly, but they have been progressively improving, and that gives me hope. We are a lot more fortunate than past generations; whether that is sustainable, however, we will see. My family grew up in the former Soviet Union and life was much harder than it is now. To see the difference in quality of life, freedom, and choices is pretty staggering, so it gives me hope.
Beatrice Erkers: I like what you said, Anthony. One final creative stretch of the imagination here. The term eucatastrophe is used to describe an event that is the opposite of a catastrophe: after an unpredictable event, the expected value of the universe becomes much higher. On this podcast we try to end by having our guests go through a similar exercise. Is there a better term for the word eucatastrophe? Also, describe a day in the life: what would a eucatastrophic event be like, after which you would feel much more optimistic that the future will go well? On the back end we will work to create a story prompt, writing a story and creating an artwork that visualizes this day in the life and the future you are describing. Perhaps, Anthony, you would like to take a stab at that first. What would a moment like that be for you?
Anthony Aguirre: In physics there is the term phase change, which, as you know, describes not just things like water and gas, but other systems undergoing qualitative change as well. Phase transition, where the system becomes just fundamentally different, is another term. I am not sure if positive phase transition is any better than eucatastrophe, but that is kind of the way I think about it: something fundamental has shifted, but in a good way. I worry that this may be too wishful, in the sense that humans and our desires are complicated and complex, but I will still make a go of it. One of the things that came out of the Augmented Intelligence Summit was the idea of fiduciary AI assistants. I have been calling them loyal AI assistants: a loyal AI system that doesn't have selfish interests and works to advance your goals and interests.
I think this developing idea is very empowering to individuals, if AIs are able to know us very well in order to assist us rather than manipulate us. We would rather it be the first, obviously. I imagine a transition to widespread availability of high-powered, loyal AI assistants. You can imagine some helping with science and complex problems, alongside others helping with our everyday lives and with navigating information dynamics. This would be a system or companion that is not annoying like Siri or Alexa, because you would be in control. That is an example I feel pretty good about, because it is not a single thing happening, but something explicitly constructed to figure out those complex goals and help on a daily basis.
Beatrice Erkers: Wonderful. Yeah, I remember when we had a brainstorming session on this and I think it is something incredibly tangible and not too far down the road. We will let you go now, but thank you so much. I think that was a great final point. It was wonderful to have you on board for today’s podcast Anthony.
Anthony Aguirre: Thanks for having me. It is wonderful to have been here and to be a part of some of Foresight's wonderful work. The institution is one of the visionaries of the world.
Beatrice Erkers: Wonderful, thank you! Anna, we would still love to hear from you about a eucatastrophic moment you would think of.
Anna Yelizarova: The term itself reflects not just something good happening out of the blue, but everything being at the brink of collapse when suddenly something positive comes. I think a eucatastrophe cannot come without a looming threat of catastrophe, at least that is how I am interpreting it. In a way, it would follow something horrible. I wonder what that could look like if we are thinking about AI. Many people are fed up with their needs not being met and with the nonstop churn. I could see friction building up, things getting violent and very dicey, to the point where some agreement is reached to try to address those needs. One interesting paper out of FHI was the Windfall Clause: an agreement among companies building AGI that, if they do build such AI, a certain percentage of their profits is reallocated into a trust. A eucatastrophe for me would be an event where we collectively agree that enough is enough and we break out of this paradigm, becoming better people. That is something I would think about.
Beatrice Erkers: Thank you for sharing that. Also thank you for joining us today, helping run FLI, and running the worldbuilding competition. You have helped us understand both how important it is to think about these things and how hard it is. It is going to be a bumpy ride, but let's keep pushing. I think that is everything. Would you like to share when the last day for submission is?
Anna Yelizarova: Of course, it is April 15th. That is when submissions close. We will then announce 20 finalists a month later, and winners will be made public sometime in June.
Beatrice Erkers: That is great. We will try to have the podcast out by then. Thank you for joining us today.
Anna Yelizarova: It was lovely to be here!