Podcasts

Richard Mallah | How Aligned AI Could Help Us Create A Flourishing Future

about the episode

What could the world be like if we managed to get aligned AI? And will we be able to make such a future happen?


This episode of the Existential Hope podcast features Richard Mallah, Director of AI Projects at the Future of Life Institute, where he works to support the robust, safe, and beneficent development and deployment of advanced artificial intelligence. He helps move the world toward existential hope and away from outsized risks via meta-research, analysis, research organization, community building, and advocacy, with respect to technical AI safety, strategy, and policy coordination.

This interview was recorded in July 2022.

Xhope scenario

What could our future be like if we manage to create aligned AGI?
Richard Mallah

About the Scientist

What could the world be like if we managed to get aligned AI? This art piece envisions a positive future in which aligned AGI (Artificial General Intelligence) has helped humanity solve most of its problems. What we see in this vision is a new and abundant world, with a wider moral circle for all living things. The vision behind this art piece was proposed by Richard Mallah, Director of AI Projects at the Future of Life Institute, where he works to move the world toward existential hope via the safe and beneficent development of advanced AI. To hear Richard expand on his vision and tell us more about his work at FLI, listen to his interview on the Existential Hope podcast: https://www.existentialhope.com



About the art piece

Linda Rolland, AKA Kalon Glaz, is a leading ArtistoPunk, author, and curator working within the digital art sphere. She is part of the cybernetic and seapunk ecosystem and has curated and exhibited with leading digital artists such as Pia Myrvold, Pierre Friquet, Yann Minh, Patrick Moya, Nao, France Cadet, Zaven Paré, Filippe Vilas-boas, Tutsy Navarathna, Nicolas Magat, and Bernard Szajner.
Recently she presented her work at the Satellite Gallery in Paris for “Happy Birthday Robot,” an exhibition celebrating the 100th anniversary of the word “robot.”
She teaches “creating your metaverse” in collaboration with Yann Minh at the Intuit-lab School of Art and Design.


Transcript

XHope Special with Foresight Fellow Richard Mallah

The Foresight Existential Hope Podcast is co-hosted by Allison Duettmann and Beatrice Erkers. This XHope Special features Foresight Fellow Richard Mallah, Director of AI Projects at the Future of Life Institute (FLI) for over 8 years. Having contributed to many advancements in the field of AI and AGI, Mallah shares his insight into the work he has been doing, his day-to-day, what got him into the field, his aspirations for the future of humanity, and much more. This episode also places a special focus on the importance of existential hope, looking to the future, and planning ahead for the best outcomes.

Allison Duettmann: Hello everyone and welcome to the Foresight Existential Hope Podcast. This is a special episode in which we invite our senior fellows to discuss a little bit about what gets them excited about their work, what gets them excited for the long-term future, how they feel we should navigate to make those futures amazing, and how people can start thinking about these fields. Hope is important, but it alone is not a strategy, so it helps to see how we can be hopeful while developing a strategy. With that, I am happy to have Richard Mallah here today. Richard is a luminary in his field and has been for quite some time. He has also been Director of AI Projects at the Future of Life Institute for some eight to ten years now. I think FLI really helped set the stage for long-term AI safety, and I recommend everyone check out the Future of Life Institute playlists from those conferences, which are truly great.

Richard, you also produced these wonderful maps of the AI safety and policy landscape, which I really enjoyed digging into a few years back. More recently, you have become a Senior Advisor to some really interesting ventures that I don’t know too much about; more or less, they involve building out more of an AI safety ecosystem, which has a lot of long-term potential. You have joined as an honorary Senior Fellow, and we already had the pleasure of a virtual chat during our Vision Weekend, our annual member gathering, last year. There we talked a bit about AI safety in a somewhat more technical discussion. If anyone is interested in that, I invite you to check it out.

Today, we will talk a bit more about hopeful, long-term questions, though we probably won’t be able to help getting a bit concrete as well, which is probably for the better. For now, let’s start with a few questions to help people get to know you and your path. If you want to just introduce yourself a bit more: what are you working on, and what led you to where you are today?

Richard Mallah: Sure! I work to make the coming transition toward AI go well for humanity and the rest of the life we share this planet with. It is probably going to be the biggest event since the taming of fire. It could turn out to be the greatest thing that happens to us, or the worst. I have a pretty broad remit in the space: I do meta-research, analysis, advocacy, and field building in AI safety, with respect to the technical, strategy, and policy aspects. I also spend a lot of my time these days planning a new series of integrated projects in this space, which I will be hiring for shortly, but a lot of that is still TBA. I will be doing things such as delving into safety properties of multicriteria approaches, fleshing out methodologies for scanning for aggregate risks, considering how correlated the risks of different proto-AGI systems might be, as well as creating tools for researchers and for policymakers to understand trade-offs and how to manage them better.

In terms of my life story and how I got started, I will start from the beginning. I was lucky enough to be born with a book on my bookshelf called “One, Two, Three…Infinity” by George Gamow. It introduced me to so many topics: math, physics, cosmology, and philosophy. It also introduced me to multidimensional thinking, topology, and relativity, and it taught me that it is key to be open to the infinite. It was sort of an invitation to keep expanding my horizons, so it was very influential. A second book that was very influential when I was a little kid was the “Whole Earth Catalog,” which of course we have talked about at Foresight a few times. It was one of the first books I ever bought. It introduced me to holistic thinking, to a combination of distributed and centralized collaboration, to working for a better world, and to how people can be creative and make a living at the same time. That was also very inspirational.

I was very much into hard sciences, like engineering, alongside social sciences and business. I also started programming very early, in elementary school. I realized at the age of 13 that it would be possible to create a program that programs a program. That is what got me very interested in the mechanics of AI versus the sci-fi of it. In high school, I read “Gödel, Escher, Bach,” and that was very inspirational as well in this direction. It was very theoretical. I then focused more on tangible machine learning and AI in college, which at the time included things like probabilistic graphical models, and SVMs were all the rage. I was also very much into metaphysics, philosophy of mind, metacognition, and metaethics. These all turned out to be important for dealing with these long-term issues and how AGI should unfold. I would say those were a lot about the “what,” whereas a lot of the CS, ML, and AI courses were about the “how.”

Essentially, I spent over 20 years doing AI R&D in industry, in a mix of different roles: algorithms research, research manager, development manager, management consultant, chief scientist, CTO, and so forth, but usually pretty hands-on R&D. I also got the chance to work on trustworthy AI in industry, so a bunch of different projects there. I was also in a position during the financial crisis where I managed various sorts of risk management systems, such as counterparty risk and enterprise risk management systems, at the largest asset manager in the world. That experience lent me a visceral appreciation for the interplay between systemic tail risks, technology, multi-scale foresight, the agency we do have to reduce risks, and catalyzing systemic improvements in time.

Regarding AGI, it really clicked for me in 2010. I went to the first 100 Year Starship conference. I forget how or why I was invited, but I was. This was a conference by DARPA, NASA, and a few others about how to actually make a real starship within 100 years. That was my first real long-term exposure. I got this email randomly at my work email, where I was doing very near-term AI work, and I decided to go. I took a plane to Florida, and there was a talk there on AGI. I realized at that point that this was plausible, and that rocked my world. A couple of years later, I went to an AGI conference held at FHI in 2012, which opened my eyes to a lot of the risks I had sort of thought could be very disruptive. It really crystallized my concerns. From that point, I was searching for what I could personally do to help safety and existential stability in the face of AGI.

Allison Duettmann: Wow, I love these questions because you never know how someone got into the field. That is such an intriguing story. For those who may not know, FHI is the Future of Humanity Institute, which works closely with the Future of Life Institute. We will definitely add “One, Two, Three…Infinity” to our website, as it is very interesting. I also think you produced at least two projects together with Lucas Perry, which were these maps of the landscape of AI safety. For someone who may be new to the field and listening, there is no better person than a map maker of these fields to give them a bird’s-eye view of what they are about. For someone who is new to the field and thinking, “Okay, AI safety is nice, but it is overwhelming,” what would you say is the rough lay of the land, how can people position themselves, and how do you help someone think about the field?

Richard Mallah: Yes, of course. So, AI safety and AGI safety encompass many things. There are many different metaphors we can use: it is a shield we hold, it is the rails we run on, and it is our navigation system. I would say there are only a handful of key facts here that tie together around where these risks come from. Firstly, we get what we ask for, not what we want. You can think of the story of King Midas or the Sorcerer’s Apprentice for a similar concept. What we ask for is usually expressed as a metric, and Goodhart’s Law tells us that as soon as a metric is used to influence a system, it becomes a bad metric. The dynamics of the system evolve to contort and optimize to that metric. AI discovers creative ways to do the things we ask for, and these specific ways can be thought of as instrumental goals created to accomplish them. Meanwhile, most of the field of AI, computer science, and electrical engineering is focused on creating, powering, and expanding that optimization power.

So when that optimization power grows together with what we call the action space, the range of types of actuation a system can take, we get impressive new capabilities. There can also be inflection points of completely, qualitatively new capabilities that are actually pretty difficult to predict in advance, as we’ve seen with some systems in the last few years. However, with enough optimization pressure, nearly all of the tasks one can give a system generate a pretty similar plan: gain power, resources, and independence, and maintain itself. In order to gather resources and prevent its shutdown, the system will try to learn about the circumstances it is embedded in. When a system does realize its place in the world, it can then change itself and expand, which are the issues of embedded agency that MIRI talks about. Essentially, we do not know how to reliably keep all of that in check. That is the endeavor of existential AI safety.

Allison Duettmann: Just a quick question: I did not know that FHI already had an AGI conference, did you say it was in 2012?

Richard Mallah: Yes, FHI co-hosted the 2012 AGI conference with Ben Goertzel's AGI conference organization, and it was very FHI- and safety-focused. It was at Oxford in December 2012.

Allison Duettmann: Wow. So how has your thinking on timelines shifted since then? Not to get too into the weeds, but there have been things like the Robin Hanson and Eliezer Yudkowsky debate on timelines and takeoff speeds. More recently, I think Paul Christiano has taken up Robin Hanson's side of that debate, arguing for a gradual AI capability increase versus a fast takeoff. From your perspective, having been in this area for so long, do you think these perceptions, or these timelines in general, are accelerating? How has that thinking changed?

Richard Mallah: Yes, so as people look at this, timelines have shortened. My timelines were pretty short to begin with. I had 10 years of experience in the field when I started to think about this, and then another 5 years of leading pretty cool AI projects in industry. I was trying to map out the different types of synergies and paths we could take to get there. The details are not something that should be shared too widely, but I do realize that there are a lot of synergies between different threads of research, which will hit us faster than a lot of people expect. So yes, I think in the world at large, people do think that AGI is a more realistic proposition than they did 5-10 years ago.

Allison Duettmann: Yes, absolutely. Do you think there have been any other culture shifts in the general field of AI safety in particular? If I were coming into the space now, let's say, would I find anything different compared to entering the field 10 years ago?

Richard Mallah: Yes, I think so. I think the concept of AI safety is much more mainstream now, though a lot of that is still with respect to near-term, narrow systems. The concept of AGI itself is also becoming more mainstream, though less so than standard AI safety. Nevertheless, I think a lot of people are realizing that we will have lots of issues. It is starting to click for a lot of people, so part of the hope in the field is getting more of the people who think about AI ethics, near-term safety, or machine learning on board. One might be surprised, though, by how dismissive some people are of these issues. But yes, we are trying to get more people to realize that more scalable types of safety are needed.

Allison Duettmann: That is very cool. Yes, I think most people would want AI to align with what they are hoping to get out of it, but that doesn't always mean it aligns with what is good for society at large. So I feel as though there is still an interesting mismatch there. I wonder if you could point out any areas that you think are still undervalued right now in the larger context? Or which areas do you feel some people have yet to open up to that you would like to draw attention to?

Richard Mallah: Of course. I will use this opportunity to give a preview of a few topics in the technical research plan that we are trying to push forward. I should note that all of these are inspired by the subfield of AI known as knowledge representation. KR was a big part of AI before ML swallowed much of the field in the last decade and a half; currently, ML is having a digestion problem from swallowing these pieces too fast, in the sense of not knowing how to deal with them or use them. The first topic is extending environmental safety techniques, such as a self-driving car not crashing into a person, toward more metagenic safety, like not deceiving someone. Right now, these are very distinct kinds of approaches, so we are working to create a spectrum of related technology between them. Secondly, we are working to establish meta-objectives for more cognizant and more active management of sub-objectives and instrumental goals, such as actively managing trade-offs and mediations, as well as figuring out when it is the right time to look for synergies. Thirdly, we plan to model and parametrize values as a modality in their own right, the way images or text are modalities, so that values can be expressed or reflected across different types of systems at appropriate granularities, levels, and ways. Our fourth plan is optimizing architectures for ideal componentization, both for interpretability and for optimizing the propagation of alignment, especially inner alignment. Overall, I am seeking collaborators on all of these, and I will be hiring for these and others later in the summer.

Allison Duettmann: Oh wow, that is exciting. With the hiring, what sorts of roles would there be? I do not know how much color you can provide, maybe none, and that is fine. Nevertheless, how can people find out more about it?

Richard Mallah: This is only a preview, so we do not have these posted yet. However, there will be a variety of roles, such as technical machine learning, AI safety, policy, law, strategy, general research, and operations as well.

Allison Duettmann: That is definitely a great range of opportunities. It is very exciting and quite technical. How did you arrive at those areas that you focus on? Did you just look around and see an opportunity there, or was there another thought process behind it?

Richard Mallah: Yes, so actually the situation in the field is that proposed safety techniques are sort of piecemeal. We have some metagenic safety by design, addressing individual pieces of metagenic safety, and those are mostly still aspirational. Separately, we have some opaque, indirect value learning work that straddles the more easily stated or inferred parts of environmental safety and ethics. Then there is hardwiring values by design, which is a lot of what the near-term AI ethics community is working on. And these all share the paradigm of trying to optimize for a single, brittle objective.

So we actually need solutions that cover the fuller breadth across metagenic safety, environmental safety, and ethics in a more holistic way and make them compatible. Then we can manage optimization pressures, instrumental incentives, constraints, goals, and subgoals explicitly, with a clear but dynamic precedence hierarchy. Essentially, we can do all of that in a way that is more fundamentally auditable by humans from the start, versus just the equivalent of biological dissection. All of that should also help competitiveness against less safe systems, by hopefully being relatively efficient and by unlocking other useful interpretability and control features.

If these work out as hoped, then they can come together to do all of that. However, determining the extent and the limitations of that requires research. Even if some of these work out and some don't, that will still help steer things toward better safety for prosaic systems. But yes, by mapping the space, I was able to look for white space, especially regarding how pieces can tie together and how they currently do not. I have also worked on over 100 AI projects in industry, learning a lot by designing and deploying systems that I feel are not well represented in a lot of theoretical safety discussions, with implications for scalable safety. So it includes taking ideas from safety engineering, knowledge representation, control theory, and end-to-end systems architecture and combining them in new ways that will be surfaced here. Of course, many safety proposals do not scale to generality; there are many pitfalls to consider. So far, these have an argument for being scalable, but they require much more research. Nevertheless, they seem high-value, neglected, and tractable. These are just a sampling of my overall agenda; however, given the short timelines, we are aiming to fill these gaps.

Allison Duettmann: So is that what we can imagine your day-to-day to be like? Do you poke around looking for these gaps you can fill? What does someone in your role usually do? Also, I am sure you read a lot. What is everyday life like?

Richard Mallah: I probably have an atypical daily schedule, because I work across technical, strategy, and policy work; most people in the field do not work across all three. It varies a lot depending on the projects or programs I am working on. Nevertheless, most days include collaborating on documents, such as position papers, articles, critiques, or feedback on technical or policy papers, and so forth. I also plan programs, do resource allocation, plan workshops, participate in standards discussions, and research crux issues, as you mentioned.

Allison Duettmann: Wow, so basically all of it. Wonderful, I think this gives people a better overview of the field of AI safety from your vantage point. I think it is a nice dive-in. I will now hand it over to Beatrice to lead us into more hopeful waters now that people have a better foundation of where we stand.

Beatrice Erkers: Yes, hello! I am a bit nervous about your potential answers, Richard, given my outsider perspective. I have personally noticed that there has been an increase in urgency regarding the AI safety question. So I am very happy to have an AI expert to ask these questions of, but still a bit nervous. Essentially, this project and podcast aim to identify what we are aiming for, versus solely focusing on what we do not want. I will dive in: would you consider yourself an optimist about the future?

Richard Mallah: Yes, I would say so, especially when zooming out. I am optimistic, but it is contingent. I think we have it within us to reach awesome outcomes, but we have to truly make them happen. I would say optimism and existential hope start with existential gratitude, which includes an appreciation and awe that we are here in the first place. It is amazing and unexpected. Whatever the answer is, here we are, embedded in this universe, with both individual and collective agency over it and within it. The future is not yet written, so we can continue changing outcomes for the better.

Yes, some problems are very hard and we may only get one chance. However, we have new opportunities to muster amounts of creativity that wouldn't be needed at other times. That in itself is scary, but also exciting, including creativity on how to get incremental validation on these sorts of problems so it is not just a one-shot deal. We can progressively become more certain of the existential safety of a purportedly safe AI. We can see promising paths, but we still need a portfolio approach and fresh ideas to crack this nut of existential safety. The field does keep growing, which is a hopeful sign; there are a lot more people to help work on this.

Collectively, we have accomplished banning bioweapons, mass vaccination, the Montreal Protocol on CFCs to help heal the ozone layer, as well as fixing Y2K before it hit. Making safe AGI will be much more difficult, but it is still the same kind of collective action problem we have solved thus far. And if and when things look up, the dynamics could take care of themselves, bootstrapping to become better and better. As such, I do not think it will be this Sisyphus situation forever. I hope that with the right kind of AGI we will reach a point where we have a 98% aligned system, and then it will help correct the last 2% itself.

Beatrice Erkers: That is great to hear. I am curious whether you think there is a resource challenge to making this happen, or more of a technical or value challenge?

Richard Mallah: There are certainly resource challenges. It is very much a race between capability and power, and the wisdom with which we use it. This is something FLI says often, and it is very true. We have a growing safety field, but the AI field is growing much faster and larger, so we certainly need a better balance.

Beatrice Erkers: Yes. One thing you mentioned as well was the term existential gratitude. I have never heard it before, but I will adopt it now, so thank you for that. Could you share a vision of existential hope?

Richard Mallah: Yes, certainly. If AGI is done right, it can strike down the world's biggest problems like dominos. It can unlock waves of technology for a better world, with cascades of positive externalities. It would start with things such as clinical trials taking only minutes in a computer, curing disease, curing aging, fusion, and so forth. With energy abundance, we could store enough energy for electric planes and rockets, end droughts, establish weather control, and create water abundance. Additionally, we could have a closed-loop, zero-waste economy. It is technically possible now, but we would get very little back relative to the resources it would currently take. I actually had a startup looking into this a few years back, taking waste of all kinds and extracting all the different elements. It takes a lot of energy and investment, but that will become much easier and more effective in the future. We could also have nutritious food, remediate the environment, and combat climate change.

AGI could be a mediator and peacekeeper to a degree that we have yet to encounter. We would probably also have global programmable matter and an infrastructure for it, giving us shelters, utilities, and everything we need in a matter of a few minutes. Leisure and reflection time for whoever wants it would be possible. So yes, generally people supporting each other, having time to learn, cooperating, and discussing different concepts would be possible. We want a fair allocation of resources, and it can lift all boats. Having tools to collaborate, to understand and respect each other, and to gain knowledge and better ways of thinking will be possible as well. People will then be able to work on whatever they want to work on. I also invite people to think back to the best three days of their lives, which would hopefully be what their everyday life could become.

Beatrice Erkers: That is a great practice for the concept of existential gratitude. It is also exciting to hear someone talking about all the possibilities with AI. I know you just ran the Worldbuilding competition to envision positive futures with AI, so this is a good reminder to reflect on why we are pushing this. You mentioned a plethora of technologies here. Do you think there are any other areas we should focus on in order to build this future world you envision?

Richard Mallah: So AGI is the main pivot point here. If it is done right, there is no reason we cannot knock down those dominos I mentioned. However, if it is done wrong, other technologies will not matter as much, or for long, because we would be on a dystopian trajectory. Aside from technical AGI safety, though, there are sorts of technology and institution design that are very neglected currently but will be relevant and key later on to brighter futures with transformative AI.

Beatrice Erkers: Of course. Is there a specific breakthrough you think we could reach in the next 5 years to remain on track to this positive future with AGI?

Richard Mallah: A few things I can think of in terms of actually safe AGI include being able to describe an ethical theory and create reinforcement learners that follow that theory. Additionally, having enough interpretability that a system knows when it is not behaving the way it is meant to, and the way we want it to, is important. Just because a system understands or is given an ethical theory does not mean it will follow it, so we need the right kinds of interpretability to ensure it knows when it is not following it. I think these are some milestones that are quite doable and important in the near future.

Beatrice Erkers: That is what we need to drive more people to work on, then, I think. This is also meant as sort of a career podcast, to show what needs to be done and what we can work on to get to these positive futures. We want to continue introducing these critical fields to young people. What would you recommend for someone new wanting to work on positive futures within the AI field? What should they specialize in?

Richard Mallah: If they specifically want to work on positive futures, I would say they should work on something that can get us past the critical inflection point, because getting past that point is the hard part. Whether it is technical AI safety, beneficial mechanism design, or multiscale alignment, which includes aligning not just creators and AI but also the incentives creators and institutions face, these will be important. A combination of policy and real-world institution building, including international relations, science, law, business, and systems architecture, can all be relevant to this process. All of these backgrounds are helpful if applied to the right issues in context. Also, because of the shortened timelines to AGI, I would recommend a focus on object-level work versus field building.

Beatrice Erkers: Could you explain what you mean by that?

Richard Mallah: Object level work includes doing the technical research and direct policy interventions, as opposed to doing something very meta to increase the number of interested people.

Beatrice Erkers: That makes sense. It sounds like there are a lot of ways people can contribute given their skill-sets, so that is great to hear. Also, could you recommend any books, podcasts, or videos that would be interesting for someone getting into the field, either fiction or nonfiction?

Richard Mallah: Certainly. A great introductory book is Stuart Russell’s “Human Compatible,” which I think we have mentioned on these podcasts. I would also recommend Brian Christian’s book “The Alignment Problem,” which is really awesome. If someone wants to delve a bit deeper, there are some recent papers, including “X-Risk Analysis for AI Research” by Dan Hendrycks and a colleague, which has some good paths to follow. Also, Tony Barrett and others worked on actionable guidance for high-consequence AI risk management. The first is more related to policy and the second to a technical perspective. I would generally recommend people check out the FLI website, which has a lot of high-level overviews as well.

Beatrice Erkers: Yes, the FLI website is an amazing resource for getting an introduction to many of the existential risks we are facing. Thank you! I am going to hand it back over to Allison to talk about the NFT, and I would like to hear more about that leisure and reflection time you mentioned. Thank you so much.

Richard Mallah: Thank you!

Allison Duettmann: Just echoing that, whenever I lack an introductory resource, the FLI website is a great go-to with a bunch of links; it is a treasure trove on many topics. Moving forward, I think this is the hardest part of the podcast, in which we encourage everyone to get their creative side working. In the paper that led to the term existential hope becoming well known, by Owen Cotton-Barratt and Toby Ord, they introduce the concept of a eucatastrophe, which is the opposite of a catastrophe: an event after which there is much more value in the universe. This can be a very specific instance, so to encourage people to think of this, do you have any ideas of a eucatastrophic moment that would inspire you or make you feel like we’ve made it?

Richard Mallah: Yes, so firstly, I do not like the term eucatastrophe. It sounds like something really bad is happening; it feels as though it refers to something big that is going to fail. I propose we call such big good events something like “anastrophe,” since cata and ana are opposites in Greek, so it is less confusing. Anyway, such an event could be the big inflection for the world that we have been talking about. Once aligned AGI is created, solving both technical safety and coordination, economic mechanisms included, it would create positive dynamics for everyone, knocking down societal problem after societal problem. Pieces of this new world would fit together, with much more abundance and opportunity to widen moral circles and get along better. We can also draw inspiration from the Worldbuilding finalists when thinking of specific positive events for the future. I am unsure if we have time for that, but I could give it a try if you want.

Allison Duettmann: Please do! I will post them here in the chat as well, because they are really wonderful.

Richard Mallah: Yes, so let's consider the top three finalists. By the way, Anthony and Anna were on this podcast a few months ago talking about the contest and how it works. Since then, we announced the winners on June 30th, and I will talk briefly about my favorite elements of the top three. For first place, entry 281: people bought into the need for alignment through demonstrations of what could happen otherwise. It also presented work as being optional; while we will have a lot of free time, we still want to shape reality, so we can work toward that. Thirdly, social simulations of certain policies would be possible through the government, to better understand how things would actually play out before implementing them. As for the second-place entry, entry 88, the first element I liked was the natural and built environments being in harmony.

Harmony is not easy, and it is something AGI could really help with. That project also mentioned a space elevator, getting people to space cheaply and safely, which seems very useful. Thirdly, they mentioned making all plastics eco-friendly. We have some of the technologies today, but there is still so much non-eco-friendly plastic, which is crazy. Also tied for second place, entry 313 included alignment corroboration officers, basically people ensuring AGI is behaving ethically in the way the majority wants. Even after alignment, it is good to have some quality assurance, with humans always checking in on it. Furthermore, AGI demonstrations before systems are rolled out help to increase social confidence in the new innovations. Also, personal AGI assistants watching out for you could be a very useful thing. Ultimately, I encourage people to spend some time with the finalists on Worldbuild.ai, and also just to imagine for yourself how to change your world for the better.

Allison Duettmann: Wonderful, thank you. I think there is an enormous number of project proposals, and they happen to be of high quality. There is incredible diversity as well, so there is something for everyone. It is truly inspirational. We have about 2 minutes left. Ultimately, I think AGI alignment seems like a pretty obvious case for a eucatastrophe, which could help with so much. On a more personal note, is there any particular advice, professional or personal, that you have been extremely grateful for throughout your life?

Richard Mallah: Yes, I had some teachers in elementary school and junior high who shared some good advice. I’ll share about three and a half proverbs here. The first is to invest in authentic relationships: in other words, be the sort of person you would want to have a discussion with when having a discussion, and be the sort of person you would want to hire when working. Secondly, you will regret what you didn’t do more than what you did do. There is sort of a bias here, because you could do something catastrophic and then no longer be here to talk about it. However, the saying still encourages people to consider the counterfactuals, as well as to build confidence that they are on the right track. Thirdly, the perfect is the enemy of the good. What I will call 3.5, because it is very related, is that premature optimization is the root of all evil. These are all quite related to AI risk as well as to personal matters: you want to be sure you have the right balance of trade-offs in pretty much everything you do.

Allison Duettmann: I love that. One that I came across, similar to the second one, is to take things more lightly. In hindsight, you regret things you haven’t done, but you can also regret not fully appreciating the things you did do. The concept comes back to the existential gratitude you mentioned: you are able to be here and do this. It has really clicked for me to perceive things as “I am able to do this” versus “I have to do this.” I am very happy we got to speak. Thank you so much for the treasure trove of ideas you left us with here today.

Richard Mallah: Thank you Allison and Beatrice, this has been fun.

Allison Duettmann: We really appreciate it and these are all beacons of hope. Thank you everyone for joining!

