Podcasts

James Pethokoukis | Conservatism Meets Futurism

About the Episode

Economic policy analyst and commentator James Pethokoukis discusses his book, The Conservative Futurist, and his perspectives on technology and economic growth. James covers his background, the spectrum of 'upwing' (pro-progress) versus 'downwing' (anti-progress) thinking, and the role of technology in solving global challenges. He explains his reasons for being pro-progress and pro-growth, and highlights the importance of positive storytelling and education in building a more advanced and prosperous world.

About the Scientist

James Pethokoukis is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute (AEI), where he analyzes US economic policy, writes and edits the AEIdeas blog, and hosts AEI’s Political Economy podcast. He is also a contributor to CNBC and writes the Faster, Please! newsletter on Substack. He is the author of The Conservative Futurist: How to Create the Sci-Fi World We Were Promised, and has written for many publications, including the Atlantic, Commentary, Financial Times, Investor’s Business Daily, National Review, New York Post, the New York Times, USA Today, and the Week.


About the art piece

This art piece was created with the help of DALL·E 3.


About the Xhope scenario

James is aiming for a future driven by significant technological advancements supported by pro-progress policy. In his view, technologies such as AGI, nuclear fusion, and widespread access to genetic treatments could lead to a world with less disease, abundant clean energy, and economic growth, supporting innovation and prosperity across the board. His Existential Hope scenario is a society where individuals have the freedom to invent and pursue their own visions of the future, leading to diverse and vibrant communities living in harmony with advanced technologies.

Conversely, James worries that downwing policies that slow technological progress will mean “we're unable to solve the big problems that we should have solved earlier”. For him, “not taking any risks might be the biggest risk of all”.

Overall, he stresses the importance of maintaining optimism, embracing risk, and supporting a rich educational environment which encourages the pursuit of technological progress.


Transcript

[00:00:00] Beatrice Erkers: Welcome to another episode of the Existential Hope Podcast, where we explore the minds and missions of some of the world's most forward-thinking individuals. I'm your host, Beatrice Erkers, and today I'm very happy to say we have James Pethokoukis with us. James is an economic policy analyst and commentator, he has a newsletter called Faster, Please!, and he has written a book called The Conservative Futurist, which is very much the topic of discussion today.

[00:00:24] Beatrice Erkers: If you want a full transcript of today's episode, along with other recommended resources and other exclusive content, visit existentialhope.com. And don't forget to sign up to our newsletter to stay updated with the latest episodes and community updates. Now let's welcome James Pethokoukis to the Existential Hope podcast.

[00:00:41] Beatrice Erkers: We are very happy to have James Pethokoukis here on the Existential Hope podcast. Thank you so much for joining. You're the author of this book called The Conservative Futurist that we're fans of at the Existential Hope program, and you also have this newsletter called Faster, Please! that I've been subscribed to for a really long time, which is also really good,

[00:01:00] JP: thank you.

[00:01:01] Beatrice Erkers: Highly recommend subscribing to that and yeah, very happy to have you here.

James Pethokoukis: Background and Career

[00:01:06] Beatrice Erkers: I would love to maybe just start with asking you to tell us a bit about yourself. Who are you and what are you working on?

[00:01:13] JP: My full-time gig is at the American Enterprise Institute, a think tank located in Washington, D.C. I've been here for about, I don't know, 11 years, but most of my career has been as a working journalist covering business and economics. I've worked at Investor's Business Daily, U.S. News & World Report magazine, the commentary unit of Reuters, and USA Today. So that's my background.

[00:01:40] JP: And somehow I ended up at a think tank, where I cover economic policy from the framework of economic growth and tech progress. I'm not a technologist; I can't tell you which approach to machine learning or generative AI is the right one. My goal is to come up with public policies that encourage growth, encourage progress, encourage an ecosystem that is good for technology and good for entrepreneurs.

[00:02:10] Beatrice Erkers: Yeah, that's great. I think that's just as needed as being the one who knows the technology. I don't know how familiar you are with our program, but this is the Existential Hope program, and we're part of the Foresight Institute, which was founded by Eric Drexler and Christine Peterson. When I was reading your book, I was very happy to see that you actually mention some of Drexler's work in there.

[00:02:32] JP: Yes, indeed. Indeed, I did.

[00:02:35] Beatrice Erkers: Yeah, that's fun.

The Conservative Futurist: Book Insights

[00:02:37] Beatrice Erkers: But maybe you actually want to tell us just a bit more about your book. What is it about?

[00:02:42] JP: Yeah, I felt there wasn't a book out there about why we can do better and how we can do better. A lot of the books that have come out in the last year or so were probably started during the pandemic, because their authors had extra time on their hands when they weren't commuting. And it was very tempting, when you begin writing a book, to have it be really negative, at least at first, because when I started, during the worst of the pandemic, there was a lot of talk about the United States being a failed state, about how everybody knew this was coming and yet we still didn't seem prepared. Just a lot of negativity.

[00:03:21] JP: But then, and it was very easy to absorb that, I'm not inherently a particularly negative person, I think I'm pretty optimistic, and when I started looking around, there were a lot of reasons to be super positive. We were beginning to see advances in machine learning. This was before ChatGPT, but there were already a lot of advances that people thought were going to be really good for progress and for productivity growth, further advances in genetic editing, SpaceX and reusable rockets, advances in nuclear energy. And then the fact that in this country we were finally able to come together and have government and the private sector solve a big problem: rapidly coming up with a vaccine for COVID-19. Boy, it would have been tough not to be pretty positive. I always like to say that when you see a campaign ad from a politician in this country, the last big public achievement they tend to show is the Apollo program, which was a long time ago. I think we experienced another big achievement, which was to come up with a vaccine very quickly and save millions of lives.

[00:04:31] JP: So all that stuff together, when I was already starting to write a book, just reaffirmed to me that we needed a book that showed what went wrong: why we don't have universal vaccines and coast-to-coast nuclear power plants, why we don't have colonies spreading across the solar system today like we thought we would have 50 years ago.

[00:04:49] JP: And I think more importantly, what are the economic and cultural reasons that didn't happen? Because I think they could have happened. Our decisions matter, which is maybe the biggest point of the book: our decisions matter. What we do matters, and we can create a better future, not just for people in rich countries, but for everybody.

[00:05:08] Beatrice Erkers: Yeah, I agree. Like you say, it is easy to get absorbed by the more negative or doomy vibes. One thing that really affected me was that I was pretty young when Hans Rosling, from Sweden, was presenting on how data shows that things actually have gotten better, and I think that was really impactful. So I've been trying to see if it's true, and it does seem to be true. But maybe you could expand a bit more, because you have a few concepts in your book that you use, and one that's very recurring is upwing and downwing.

Upwing vs. Downwing: Political Spectrum Redefined

[00:05:44] Beatrice Erkers: Do you want to explain those concepts a bit more?

[00:05:46] JP: Especially being in Washington, D.C., there's this huge emphasis on the political spectrum, left wing and right wing, and on trying to classify which views fall where on that spectrum, and therefore which politicians. Are they center right? Are they far right? Are they center left? Are they far left? But when I took a step back, it seemed to me like those kinds of differences didn't really explain where a lot of politicians were.

[00:06:12] JP: They didn't really explain why certain politicians seem to believe certain things. The idea, rather, is to look at a politician, or a particular public policy, and ask: is it about enabling progress? Is it about thinking that we can solve problems with good ideas, that technology is a good thing, and that if we had more of it in all these key areas I mentioned, from biotechnology to computer science, to space, to energy, we would be able to solve a lot of big problems? I didn't come up with the pro-progress up versus anti-progress down formulation, but boy, to me it really seems to explain a lot today. When people try to classify, say, Elon Musk, it seems like he used to code left and now he codes right.

[00:07:01] JP: I think that's exactly the wrong way of looking at it, because if you do that, then somebody like Elon Musk is super confusing. He seems to believe in climate change and clean energy, which seems to be a code-left thing. Yet he also loves technology, he wants to go to space, and he's worried about environmental regulations.

[00:07:20] JP: That seems like a right-wing thing. What they are, are upwing things: you try to look at the actual problems facing America and the world, and at what role technology plays in helping solve them. Even if technology may have contributed to some of those problems, it could also help solve them.

[00:07:37] JP: Nuclear energy is perfect: if I know what you believe about nuclear energy, I can probably predict a lot of your other views on these other technological issues, and to me that's a perfect upwing issue. If you believe that nuclear energy might be a pretty important part of our energy mix going forward, then you probably also think: gee, why don't we have more nuclear energy, a technology which we had 50 years ago? You might start thinking a lot more about the kinds of regulations we have. You might start thinking about why abundant clean energy is important. And you will probably think about technology as a way of solving problems, such as climate change, having a cleaner atmosphere, and providing abundant power to AI data centers, which we didn't realize we were going to need.

[00:08:25] JP: Now we do, and of course you have some people saying that's why you can't have AI. But that seems silly to me, because AI might be able to help us solve a lot of issues, and all we need is the power. It's about embracing a pro-progress, pro-technology, pro-risk-taking point of view, where the focus, rather than better safe than sorry, is: not taking any risks might be the biggest risk of all.

[00:08:48] JP: So that was upwing, and obviously downwing is the opposite: being worried that every reactor is going to explode, and thinking we shouldn't do anything for fear there might be some sort of disruption. I think it's a very useful framework for looking at, again, public policy.

[00:09:04] Beatrice Erkers: Yeah, I definitely agree it's probably very good to step away from some of these really old political stamps, or whatever you want to call them. One of the things that I really liked in your book was that you had a lot of examples of fiction, sci-fi, and how they painted different dystopian or utopian pictures. I also really liked that you had examples I'd never heard of, because I've read a few of these types of books where you hear the same story again: things had gotten better, then in the sixties we got really excited with the Apollo space program, then that stopped, and since then we've been doomy. That's the story I've heard many times. But you specified it a bit more, and I thought that was interesting. And you had this example of Walt Disney as the most upwing person, which I thought was really interesting.

Walt Disney: The Ultimate Upwing Visionary

[00:09:53] Beatrice Erkers: Do you just want to tell the listeners a bit about what it was that made Walt Disney an upwing person?

[00:09:59] JP: Yeah, the cartoons and the amusement parks. He was a very interesting, very American kind of character, because there was so much nostalgia, really, in Walt Disney. And if you've been to the theme parks, there's tons of nostalgia about the America he grew up in.

[00:10:18] JP: When you go to Disneyland, the first place you walk into is Main Street USA, which is an idealized version of the town he grew up in. But with that, and I think this shows a real tension that we still see in America, came a great love of the future.

[00:10:34] JP: With the original Tomorrowland theme land within Disneyland, Walt Disney absolutely knew that he had to have something in that park that spoke about what the future could look like in a realistic way. And indeed, when Disneyland first opened in the 50s, the tallest thing in the park was a giant, now cartoony-looking rocket.

[00:10:58] JP: It was taller than Snow White's castle. That was the centerpiece of the park. And when they were doing a television broadcast, they did it at the part of the park where you could see that rocket. Some of the early rides were about going to the moon, but they weren't science fictional.

[00:11:16] JP: They were based on what they knew at the time, on what we could do in the next 15 or 20 years. So it was a love of the future, a love of the future that was practical. And that theme land, Tomorrowland, to me really epitomizes the what-went-wrong part of the story you were talking about, because it was very easy back in the fifties and sixties to create a theme land about a realistic future. There seemed to be so much happening, whether it was the atomic age or the space age, so it was pretty easy to come up with ideas for what you could do in a theme park.

[00:11:55] JP: But then it became really hard, because the space age ended and the atomic age more or less ended. Even Walt Disney was looking at Tomorrowland, eventually in Disney World too, and you see it in other theme parks as well, and thought Tomorrowland looked like Todayland or Yesterdayland. There was a lack of that futuristic enthusiasm, there seemed to be no energy in Tomorrowland, and that's what you saw in the world: we became less enthusiastic about the future.

[00:12:26] JP: And we became focused on everything that could go wrong with technology. You certainly saw that, as you mentioned, in a lot of the culture, in a lot of what Hollywood was doing. And I think that matters. I think it matters what our theme parks look like. I think it matters what our films and TV shows and books say about the future, because with progress comes disruption in the job market.

[00:12:48] JP: If you have a dynamic economy where there are a lot of technological advances, you'll get new companies and the old companies will go by the wayside. Jobs will come, jobs will go. So there's a lot of disruption. And for people to think that disruption is going to be worth it, they have to think that there's a better tomorrow on the other side of it.

[00:13:07] JP: And I think people used to have an easier time imagining what that better tomorrow would look like. Now, what does that better tomorrow look like? What do people think? Listen, I've done a lot of podcasts for this book. Not as good as this one, obviously, but I've done a lot of podcasts for this book, and oftentimes the producer, who will often be a younger person, younger than the host, will say: you seem to be very optimistic, but nobody I know is optimistic, nobody in my age bracket. They think that only the rich will get richer, that obviously the climate's going to get worse, so why do we even bother? That matters. And one reason, it's not the only reason why people think that, but one reason is that they have never, in their entire life, been given a positive yet realistic view of what a better tomorrow looks like.

[00:13:56] Beatrice Erkers: Yeah, I agree very much. And I think that's what we're trying to do with this program: trying to figure out what it could look like, what would be exciting futures that are plausible yet still ambitious. That's the challenge, or the trade-off. A few things we've tried: we just had a world-building course that was about building a positive future with AI in 2045.

[00:14:19] Beatrice Erkers: And yeah, it is really interesting. 2045 is just 20 years ahead, but things could potentially, if we lean into it, be very different in a positive way, not just in the negative way that we often get caught up in thinking about.

Technological Optimism and AI

[00:14:34] JP: 20 years, a lot can happen in 20 years. One of the websites I love to go to is the Metaculus forecasting website; it's a communal prediction market. And if you look at a lot of the AI predictions, setting aside what you're hearing from the companies, then fingers crossed, if we're able to get something approaching human-level artificial intelligence, and we use that to solve problems much faster and in a deeper way than we currently can.

[00:15:02] JP: That's a very different world that we could have in 2045. Or put that aside, and say it only allows economies to grow faster. If we could grow our economy in the United States at two and a half or three percent year after year, rather than the roughly two percent a year that the Fed and a lot of Wall Street economic forecasters project year after year, you're talking about a far richer, far bigger, far more technologically capable economy after just 20 years of that sort of compounding, cumulative growth.
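The compounding arithmetic behind that claim is easy to check. A minimal sketch (the 2%, 2.5%, and 3% rates are the ones James cites in conversation, not a forecast of any kind):

```python
# After 20 years, small differences in annual growth compound into
# large differences in the total size of the economy.
YEARS = 20

def compound(rate: float, years: int = YEARS) -> float:
    """Total growth multiple after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

for rate in (0.02, 0.025, 0.03):
    print(f"{rate:.1%} per year -> {compound(rate):.2f}x the starting economy")
```

At 2% a year the economy ends up about 1.49x its starting size; at 3% it ends up about 1.81x, roughly a fifth larger than the baseline path, which is the gap James is pointing at.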

[00:15:38] JP: Listen, it could be a much better world, which is why I feel desperate to make sure that we have the right kind of public policies to ensure that growth, and to make sure that we don't do something dumb at this moment to slow it down, with some sort of AI pause or a regulatory environment so strict that it slows things down. I think this is a really important moment. And that's why, as I wrote my book, I became not just more excited by it, but more frantic to get it written as quickly as possible, because I do think this is a special time that I don't want to waste.

[00:16:12] Beatrice Erkers: Yeah, it is a very special time.

Balancing Risks and Rewards

[00:16:14] Beatrice Erkers: You've spoken a lot now about the importance of taking risks, and I think I would agree with that, but it would be really interesting to hear: how do you think we balance the optimism and excitement that we're trying to build with dealing with the risks and the challenges?

[00:16:33] JP: It's interesting. As I wrote the book, for me the classic example was nuclear power. Boy, we would love to have right now those thousand nuclear reactors that the Atomic Energy Commission thought we might have by the year 2000. We wouldn't be talking about climate change, for instance, and we wouldn't be talking about how we can't supply enough clean power for AI data centers.

[00:16:56] JP: So as I was writing the book, the nuclear example was at the forefront of my mind. But by the time I was finishing the book, it became AI, because we were starting to see a very similar dynamic. When OpenAI rolled out ChatGPT in November of 2022, we had about a week or so of stories about: wow, this is neat.

[00:17:17] JP: With the image creators, even regular people can create really cool images. And with ChatGPT itself, you could have Winston Churchill do a rap song, and it would be like: wow. So there are fun things you can do with it. And after about a week, it became: oh, by the way, this is going to take all the jobs.

[00:17:35] JP: And when it's done taking all the jobs, it won't need us. And then it was: it will kill us, with examples from the Terminator. That extreme pessimism, I think, has really dominated the debate. What if this goes wrong? What if that goes wrong? People will say: "Yeah, maybe AI will help us solve cancer; yeah, maybe it'll create these amazing new materials, allow us to build a space elevator or something... but what about deepfakes? What about bias? How can we ever rein it in if we have to?" If there's one thing we should have learned over the past half century, it's that we're really good at slowing down technology. We know how to do that. That's what we've demonstrated: we know how to take something like nuclear fission and make sure that we don't build a new nuclear plant in this country for decades. We know how to take a space program and shut it down, so that we have not taken any humans beyond low Earth orbit since the early 1970s. So I think we've demonstrated that we know how to slow technology down and be overly consumed with risk. What we need to demonstrate now is that we know how to think about risk in a way that doesn't create stagnation. And it's amazing to me, with AI, that we don't look to the 1990s, when we had a brand new information technology and we lightly regulated it, because we didn't know exactly all the good things it would produce, and we didn't want to slow it down when it was so early and evolving. Somehow that is not the example we've defaulted to with AI; rather, people think of it as a nuclear weapon.

[00:19:23] JP: I think a lot of that pessimism that we've been steeped in since the seventies still remains. And I like to think my book is one small corrective, hopefully with more to come.

[00:19:33] Beatrice Erkers: Yeah. I have a few things on this that I think are very interesting. One is that I was in a conversation relatively recently with some space-interested people, and they were talking about how we need to come up with some sort of new framework for what risks we're willing to take with space exploration, because right now we're taking risks with hardware at most. And one of the things that was pointed out in that discussion was that risk is very contingent: what risks we are fine with taking often depends on what historically has been okay, or what we're familiar with.

[00:20:10] Beatrice Erkers: I think the example given was that we're okay with mountain climbing, but maybe not with medical self-experimentation, whereas they may carry the same amount of risk; one is just more controversial because it doesn't have the same historical context or familiarity to it. So that's something I've recently found really interesting.

[00:20:31] JP: People are bad with risk. We're bad at conceptualizing very scary-sounding things that are very rare. We're very bad at that, and if you look at a list of well-known cognitive biases, that's one. There's the fact that many of us have a natural caution. There's the fact that if we hear about something bad, we will anchor on it and be less open to counter-information once we've anchored on that bad thing.

[00:20:59] JP: So we're fighting ourselves. I think that's a really interesting point with the mountain climbing example. We're constantly fighting ourselves just to be able to conceptualize risk. And I think we're seeing it right now with AI, where we've all been inundated with negative images about advances in artificial intelligence our whole lives. So we default to those images, and it's very hard to counter them with facts, or even with a very simple question: if this is a fast-evolving technology, how can we possibly preemptively regulate what the risk actually might be a year from now, or two years from now?

[00:21:48] JP: Isn't it better to rely on the sort of laws and rules we already have? Not just because it's preferable to react and deal with problems as they come up, but because there's no other choice. Look at the pandemic, which should be a powerful example of this: we've had previous pandemics in the 21st century, some quite severe, like the avian bird flu, and we've had a gazillion conferences producing a gazillion white papers about the risk of a pandemic and what we should do about it.

[00:22:22] JP: You had public figures leading up to the pandemic talking about the risk of one. All our presidents here in the United States have been well briefed. And our culture has warned us about pandemics; oftentimes it warned us by saying the pandemic would turn us into zombies, but there have been a lot of pandemic kinds of movies.

[00:22:41] JP: And yet, we weren't really prepared for this pandemic. What we did see was how valuable it is, once the pandemic is here, to be a rich, technologically advanced nation, so that you can react in real time when those risks finally emerge. All the preparation and all the white papers weren't as important as being a country that could create and deploy a powerful new vaccine.

[00:23:12] JP: And I think that lesson, about the power of being technologically advanced and that time is of the essence, matters. What else will happen where we wish we had not slowed technology down, so that we could deal with it, whether it's planetary defense or the next pandemic? The loss from not being more advanced, unfortunately, you perhaps only perceive once it's too late.

[00:23:37] Beatrice Erkers: Yeah, I assume it's very good to think about what challenges may be ahead so we can be proactive in solving them. And probably the biggest challenge is balancing that; it shouldn't be all one or all the other.

Existential Hope and Future Technologies

[00:23:51] Beatrice Erkers: And I think one of the things that I really liked about your book was the chapter where you speak a bit about, for example, Toby Ord, who is actually the one we've taken the term existential hope from, because he wrote this paper called Existential Risk and Existential Hope. So it's very much relevant to what we're talking about today. In that paper they say: yes, we should probably work to decrease existential risk, but should we also work to increase existential hope? And you had some ideas for that.

[00:24:17] JP: By the way, I've interviewed Toby for my newsletter and my podcast. One of the things I like is that he thinks it's really important to think hard about what the risk actually is, so that we know the actual risk. It's so easy to say that something's existential and that humanity is at risk. Not really: in many ways it would actually be very hard to wipe out humanity.

[00:24:41] JP: So you have to begin by figuring out, as empirically as you can, what the risks are. I think one way of dealing with our natural fearfulness is the facts, and Toby does a great job with that in his

[00:25:00] Beatrice Erkers: Yeah, I agree. It's very obvious in that book that he thinks there are many technologies that could pose great risk to us. But you propose that technology is also our only chance, or our savior, as well; we need technology for civilizational resilience. Could you expand a bit more on that?

[00:25:20] JP: We talked about how it's tough finding media that explores these themes, but one work that does, I think accidentally, is a great William Gibson book called The Peripheral, which later became a pretty good series on Amazon Prime Video. The scenario takes place partly in the near future, maybe the 2030s, and partly 100 years from now, and in between, spoilers, everything went wrong. No single thing, but lots of things went wrong: we had bad pandemics and climate change and a limited nuclear war, and 80 percent of humanity died. And as the world was falling apart, and I will remember this phrase from that book for the rest of my life, William Gibson wrote: then science started hopping. In the middle of all this, we started figuring out AI. We started figuring out nanotechnology, the Eric Drexler nanotechnology. It was a little too late. Yeah, humanity survived, but what if we had had those tools, those technologies at our disposal, a little earlier?

[00:26:36] JP: And heaven forbid we had had the COVID-19 pandemic in the nineties, when it would have been far more difficult to figure out what the pathogen was. We would not have had the mRNA vaccines. We wouldn't have been able to work from home so easily. Thank goodness it happened when it happened. And I would hate for something else to happen and for us to think: boy, maybe we shouldn't have over-regulated this technology so much 10 years ago, or maybe we should have spent more money on basic research 20 years ago.

[00:27:06] JP: And it's that, to me, that is the risk. I worry that we inadvertently slow technology down and we're unable to solve the big problems that we should have solved earlier.

[00:27:17] Beatrice Erkers: Yeah. You also mentioned a few of the technologies that you're potentially most excited about.

[00:27:25] Beatrice Erkers: Would you maybe want to tell the listeners what technologies you're most excited about right now?

[00:27:31] JP: I'm not sure any of these will be stunning to people. When I talk to actual technologists, and some of these are off-the-record conversations, so not on my podcast but in other venues, the enthusiasm about AI is striking.

[00:27:45] JP: I mean, people are sick of hearing about it now, but it's just begun. So I'm super enthusiastic, and I don't think it's going to end up being some sort of junk technology that businesses never figure out how to use. The history is that there's often a lag between when a technology emerges and when we figure out how to really make it useful, and that could happen again.

[00:28:06] JP: That's what we saw with the PC. That's what we saw with the internet. But I feel really confident that the worst-case scenario with AI is that it's only as important as the internet, and if it's only as important as the internet, it's still pretty important. I'll tell you the other difference between when I talk to technologists and when I talk to regular people: there is a huge misunderstanding about where we are with genetic medicine and where we are with space. The dramatic advances with CRISPR and mRNA in solving diseases, which people will begin to hear about between now and the next five years, could be fairly astonishing. We've already seen potential cures for sickle cell disease. And on the space side, it now costs so much less to get stuff into orbit, and think about what that opens up.

[00:28:59] JP: That's always been the missing piece of the puzzle for why we didn't have the space age that we dreamed of, why nothing followed Apollo. It was a cost issue, and we decided as a country that we did not want to spend that much on a space program. Now, all of a sudden, those costs have gone down by 90 percent and will go down further still. That is an amazing advance. And think about the numerous startups working on nuclear fusion right now, again, an undercovered issue. I just did a podcast for my newsletter with someone from the Department of Energy's nuclear office, who was super enthusiastic about not just advanced nuclear fission but nuclear fusion, technologies that would once have seemed science fictional.

[00:29:42] JP: Some things that seemed science fictional the day before I started writing this book are now things we can have actual serious conversations about. So when we talk about what the world could look like in 2045: we could have human-level AI, we could have nuclear fusion reactors, we could have a sky dotted with space platforms, and various cancers and Alzheimer's could be far more treatable than they are today. That to me seems like a better world. I don't know what your vision is, but if that's what we're looking at 20 years from now, along with the economic growth and even more people globally coming out of deep poverty, I think that's a better world.

[00:30:22] Beatrice Erkers: Yeah. And I was happy reading about the technologies you mentioned, because we work with a lot of these technologies at Foresight, like biotech and nanotech. You mentioned Drexler, and you mentioned space also.

[00:30:39] JP: Yeah. All the amazing progress that seems to be happening. And boy, I probably underemphasize it, but to be able to produce lots more energy, and a lot more clean energy, that is a different world. To be able to produce energy cleanly, perhaps with geothermal in places we never thought we could, that is quite a scenario. Listen, in the mid 2000s there were concerns about peak oil, that we were running out of energy and would have to abandon the suburbs because of high oil prices. Now we're sitting here where not only did that not happen, but you can see all the avenues we have for clean energy abundance. And the only thing really stopping us from going down those avenues is our decision to go down them, and our willingness to apply the funding and the entrepreneurship toward geothermal and nuclear and solar.

[00:31:41] JP: If we don't reach that future, I think fundamentally it won't be a technology issue. It'll be an us issue: we decided, for whatever reason, not to do it. And I think we will decide to do it, that, hey, more clean energy is better! It shouldn't be a difficult decision.

[00:31:55] Beatrice Erkers: Yeah. You do go through a few potential solutions, or proposals.

[00:32:01] JP: By the way, I would view Diet Coke, which I'm currently drinking for a dry throat, as a fantastic, underrated innovation. But I'm sorry. I agree, actually; that's something I've been thinking about lately. We have so many new diet medicines that seem to be working really well.

[00:32:17] JP: And people seem really happy. I think in the future we'll have more of this type of hacked food that you can eat without getting overweight. And we haven't even mentioned Ozempic and other similar medicines, which apparently aren't just weight loss medicines but seem to be everything medicines that may help us reduce kidney disease, heart disease, various addictions. So that's a breakthrough we weren't really talking about, and now everybody is talking about it. Again, it should be another example that we have these problems, but if we apply intelligence to those problems, and AI will help us apply that intelligence, then things that seemed like forever problems are solvable.

[00:33:00] Beatrice Erkers: Yeah, let's hope so for sure.

Upwing Education and Storytelling

[00:33:02] Beatrice Erkers: And in your list of proposals for how we can build this more upwing society, one of the things you suggested was to invest in upwing education. I'd be really curious to hear you expand on that a bit more, because oftentimes when we ask people what we should do more of to create this more existential-hope future, they say invest in more education, but it's often a bit unclear to me: okay, what exactly? So yeah, do you have any ideas?

[00:33:30] JP: Yeah, I think that's super important. I cite some statistics about how, in the United States, if we could just replace the worst teachers with average teachers, what a massive economic impact that would have. But the problem I faced was: where are you going to find these people? You need a lot of teachers. What really excites me is that there have been some initial studies on AI in the workplace, and it seems to help the worst performers become average performers. And I can't help thinking: what if that's the result we get by figuring out how to use AI in the classroom, that we take our worst teachers and turn them into average teachers, because now all their kids will be aided by AI tutors that know exactly their strengths and weaknesses?

[00:34:23] JP: If we are able to bring that into the classroom, all our worst teachers become average teachers. Again, that may not seem like a massive change, but it absolutely would be a massive change. So we have to make sure that technology is able to get into the classroom. I think that's one thing. So that's the techie thing.

[00:34:45] JP: Here's the non-techie thing: stories. Exposing kids to stories of human achievement and progress, of people making decisions that create a better world. A great example, because it's a movie and a book, and I found out that teachers are using the book in the classroom, is The Martian, which is also a wonderful film with Matt Damon, about an astronaut stranded on Mars who has to start solving problems. He has to start innovating. It is a wonderful upwing film and a great book. And that's the kind of story that should be taught in the classroom. Because again, we have to be able to envision a better world, a world that is susceptible to our decisions, where we can actually change things by doing things and making better decisions, and we have to expose kids to that. Listen, there's always been a lot of controversy about curriculum and what we teach kids about history, and I think kids should be taught as accurately as possible how we got to where we are.

[00:35:54] JP: But I've had kids go through the public school system, and they know almost nothing about the industrial revolution, which, if you look at a chart of the increase in wealth and progress, looks like aliens came down and gave us technology. They know almost nothing about the industrial revolution. They know almost nothing about the great entrepreneurs. So I think you want to teach kids about that: not only STEM subjects, but the history of progress, both in a nonfiction way and through great works of fiction that are great books and also, I think, teach an important lesson about progress.

Visions of an Optimistic Future

[00:36:31] Beatrice Erkers: Yeah, I think that's actually a good segue to another part of this interview, where I want to ask you a bit more about what you imagine for an existential hope future. We do try to tell stories with these podcasts, for example. So one of the things we do is we always ask for a eucatastrophe, a term from the Ord paper that I mentioned.

[00:36:52] Beatrice Erkers: So it's the opposite of a catastrophe: something that happens after which there's more value in the world. And then we try to use this prompt to create an art piece, to actually try to show this exciting future. So if you were to propose something that we should make an art piece of, something you think would be a great event for the world, what would you propose?

[00:37:16] JP: See, because I do a lot of economic policy analysis, when you start talking about that, the first thing I think of is: oh, productivity growth doubling to 3 percent. Go make an art piece out of that. I've tried, and Midjourney does not give me a good response. And this is a bit of a side topic, but I'm very disappointed with the ability of our culture, particularly our popular culture, to create these kinds of images. Here is what I find so encouraging, though.

[00:37:44] JP: Because we now have ever-improving tools to create the images ourselves, so that even someone like me who is not particularly artistic can begin to create interesting images and videos about what that future could look like. We do not need to rely on a Hollywood studio to come up with what I would call an upwing kind of film. And you see them every day: fantastic images and, more and more, videos able to show that kind of future.

[00:38:15] JP: Now that said, and this is a big point of the book, I'm not saying what this future should specifically look like. I hope there are lots of interesting images and visions of a world with less disease and the ability to conquer the solar system, and flying cars if you choose; that's up to us. So I'm not for a government department of the future, sitting in some giant room with lots of flat screens, planning it all. No. I think if you give people the freedom, the tools, and the opportunity to pursue their vision, the collective result will probably be something pretty fantastic.

[00:38:59] JP: I have enough faith in people. That's why I have a lot of faith in the American economy, if we continue down that path. And that's why I'm very pessimistic about the Chinese economy: they don't seem to want people to have the freedom to pursue those visions independently. So as far as coming up with an image easily, I want people who have that skill to do it, and thankfully they will have the tools to create those kinds of images. But in a way, it's like the old saying: all happy families are alike, all miserable families are different.

[00:39:30] JP: And in a way, maybe all these positive visions of the future will end up looking alike, whereas the negative images are where all the action is. Plague, nuclear attack, asteroid strikes: those make for fun movies. By comparison, a bright, clean, green future, well, I'm sure there are different versions of that, but all of them seem pretty good to me.

[00:39:54] JP: Maybe there's a solarpunk version, and maybe there's something more urban and technical. But I think there's room for lots of different visions, and hopefully what they all have in common is that they respect our humanity and our ability to decide our own future. I guess that's my vision. I'm not sure it makes for a good image, though.

[00:40:13] Beatrice Erkers: I think it's interesting, and it does relate to a few of the other proposals we've received. When we've looked at futures that we're excited about, it's often futures that offer opportunities or options.

[00:40:30] Beatrice Erkers: Yeah, it's a future where you can go live somewhere based on your personal preferences and the society there fits you and yes, optionality.

[00:40:41] JP: And listen, if you want to live in a house made of fungus, that's great, but if you want to live in a five-mile-tall skyscraper, that should be an option too. If you want to live in space, if you want to live on Mars, if you want to live at the bottom of the ocean; I guess maybe some of those visions would be called retrofuturist. I want there to be opportunities for all of us to invent the future that we want. And hopefully people far more artistically minded than me, and organizations like yours, can help create more images of that future.

[00:41:14] JP: Those stories and images are important.

[00:41:16] Beatrice Erkers: Yeah, the Sora videos that have been released so far from OpenAI look just amazing. I really can't wait to use it. I hope it becomes more publicly available soon.

[00:41:28] JP: I can't wait to put on goggles so I can walk into them right now.

[00:41:31] Beatrice Erkers: Yeah.

Final Thoughts and Best Advice

[00:41:32] Beatrice Erkers: This question may be leading, I'm not sure: do you think we're on the right track? Are you optimistic about the future?

[00:41:38] JP: I am, but that's my bias, so I always have to ask whether I'm being realistic. There have been moments where things seemed to be coming together and then didn't. Boy, I think one of those moments was the early seventies, when you had exciting developments in a variety of technologies.

[00:41:57] JP: The visions people had about what the next 50 years would look like didn't really happen. I'm glad I live now and not in the 1970s, but considering what this world could look like, it has underperformed. I think the late nineties were like that too: you saw all this enthusiasm generated by the internet, but I don't think the next 20 years looked like what people in 1999 thought they would.

[00:42:22] JP: And I get into the reasons for that difficulty in the book; some of them I think were macro reasons, but some were our decisions as well. So I think we're at another such point, where we have this interesting cluster of technologies and we're trying to figure out what to do with them, how to regulate them, how to fund them.

[00:42:39] JP: And I hope that the promise of these technologies, and the fact that we should be tired of an economy that could be performing better, will provide the tailwind so that we don't miss this moment the way we have in the past. I'm extremely optimistic, I think, but again, that's my bias, so be forewarned.

[00:43:04] Beatrice Erkers: Yeah, I think it's my bias as well. Since we've spoken so much about the importance of stories, do you have any recommendations of favorite stories for our listeners? You mentioned The Martian, but are there any other ones that you think are very inspiring for the future?

[00:43:23] JP: To me, one of the most emotional, inspiring recent films is Interstellar by Christopher Nolan, which shows a world that rejected technological progress, then all of a sudden faced a problem it couldn't solve, and finally realized it had made a massive mistake and at that point had to figure out how the human race could survive. That movie, to me, is one of the most upwing of our recent sci-fi films. There's a famous line in it, about how we were born here but weren't meant to die here. So much great stuff in that film.

[00:44:05] JP: Certainly The Martian. But you can look at a lot of media and find a hidden upwing message inside, like the book and TV show I mentioned, The Peripheral, which is a very pessimistic story, but boy, if only, in that world, they had been a little more aggressive about pushing technology forward earlier, then maybe the bad things don't happen.

[00:44:26] JP: So I always look for those various moments, but yeah, I would like to not have to look so hard.

[00:44:31] Beatrice Erkers: Yeah, I think it would also be interesting to just sneak more futuristic content into kind of normal stories, like love stories or stuff like that. I think the movie Her is a good example of that.

[00:44:43] Beatrice Erkers: The main drama in that movie is more like relationship drama. Of course, he falls in love with an AI, so I guess that's a bit out there. But yeah, just, yeah, just like those types of visions would be interesting to see too.

[00:44:57] JP: Yeah, I think that's probably going to happen more and more. I'm not saying everything has to be positive, but I think we need to correct a little for the imbalance. One thing that really makes me enthusiastic, something that wasn't possible back during the nuclear era in the 70s, is the ability for people who are enthusiastic and optimistic to get their message out there, whether on social media or podcasts. You would never have heard from those people before; they had no public voice. Now they can tweet about it, venture capitalists can tweet, and you can just start up your own podcast and get that message out there. So the ability to send this message to a wide audience is available like never before, which is also, I think, a huge tailwind for people who think we can create a better future.

[00:45:42] Beatrice Erkers: We're actually at the last question I will ask you now, so we can wrap up. Just a very general question of what is the best advice you ever received?

[00:45:51] JP: The best advice I ever received. Listen, I came from a very working-class background. There were very few people with my interests. I think the best piece of advice I received was: none of that matters; what matters is what you do. People are born with advantages and disadvantages, but we all have it in our power to create a better future for ourselves than we would otherwise have. What you do counts. And I think that's true for individuals, and I think it's true for societies. All these ideas about culture and economic policy in the book: again, I don't know exactly what it's all going to look like in the end, but I feel pretty confident that it can be better. And the same goes for people's own individual dreams and aspirations; I don't know what they'll look like at the end of the day.

[00:46:40] JP: And I always tell my own kids: the most interesting jobs haven't been invented yet. So if you just have an open mind to possibility, assume that your decisions can create a better future, and go from there, it'll turn out okay. I'm not sure that answers your question, but it's how I try to think about things.

[00:46:59] Beatrice Erkers: I think it's a great answer and a great way to end this podcast.

Conclusion and Farewell

[00:47:02] Beatrice Erkers: So thank you so much, James, for coming. I'm looking forward to reading more of what you do in the future.

[00:47:07] JP: Thank you so much Beatrice, thank you so much for having me on.

[00:47:09] JP: Thank you so much for listening to this episode of the Existential Hope podcast. Don't forget to subscribe for more episodes like this one. You can also stay connected with us by subscribing to our Substack and visiting existentialhope.com, where you can learn more about our upcoming projects and events and explore additional resources on the existential opportunities and risks of the world's most impactful technologies. I would recommend going to our Existential Hope library. Thank you again for listening, and we'll see you next time on the Existential Hope podcast.