Podcasts

Jason Crawford | How technology expands human choice and control

about the episode

Our fast-paced world isn’t spinning out of our control; we’re actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency: our ability to choose our destiny rather than being subject to the whims of nature.

Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book laying out a philosophy of progress centered on human life and wellbeing.

In our conversation, we dive into the core arguments of the manifesto:

  • How we are more in control of our lives than ever before
  • Why we should reframe the goal of “stopping climate change” into “controlling climate change” and work toward installing a “thermostat for the Earth”
  • The value of nature and its interaction with humanity
  • Allowing ourselves to celebrate human achievement and industrial civilization
  • The concept of “solutionism”: a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
  • Why two common fears around the slowing of progress – that we could run out of natural resources or new ideas – are actually unfounded
  • The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
  • How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction


Transcript

[00:01:30] Beatrice: I am really excited today that we have Jason Crawford here on the podcast. Jason has actually been on the podcast before, but I wanted to have you back on to speak specifically about this one that you've written: Techno-Humanism. It's The Techno-Humanist Manifesto. This is not the full book—I know that this is just the first chapter—but you can find the whole thread on Substack, and it's also going to be a physical book from MIT Press, I believe.

[00:02:01] Jason: That's right. So, The Techno-Humanist Manifesto is an essay series. It's on The Roots of Progress Substack right now, currently being revised into a book for publication. And, yeah, what you're holding there is a little teaser that's got the intro and the first chapter.

[00:02:16] Beatrice: It's very nice. We got this at the progress conference that you guys hosted.

[00:02:20] Jason: Yes. Very limited edition run handed out at the progress conference and published by Big Think magazine.

[00:02:26] Beatrice: So nice, with all the pictures and everything. I love a book with pictures, I would say. Yeah. And yeah, I basically wanted you back on to talk about this because I thought that there's often this clash between people who really believe in technology and people that are—perhaps have a bit of a harder time adopting it. And I think that this just really holds that balance and the challenges of it. And so that's why I want to dive into this, basically. So let's just start: What made you feel the need to write a manifesto?

[00:02:58] Jason: Yeah. The manifesto is my philosophy of progress, laid out and stated simply, and the core idea is pretty simple: Science, technology, and industry are good. They are forces for good, but the reason they're good—the justification for why they're good—is human life and wellbeing. So it's a combination of holding up humanism, holding up human life and health and wellbeing as our North Star, but then seeing technology and industry as a means to that.

[00:03:30] Beatrice: Yeah, and I think that you are obviously, I would say, one of the founders of the progress movement. And I think that one of the things that I wasn't expecting—to feel like there was that much new stuff because I've been following what you've been doing for a while—but there were things that I felt were new to me or just like very crisp summaries of the ideas, I think. And I think the first thing that I would like to dive into is this idea where you speak of progress as the expansion of human agency. Could you just talk about what you mean by that?

[00:04:10] Jason: Yeah. So when I say human agency, I mean our ability to control our lives and our world, and our ability to make choices and to choose what happens to us, what happens to the world, to control our fate or our destiny.

And one of my core contentions is that I think human agency maybe is also, in some sense, a North Star for what progress means and what progress does for us. And I think that human agency in almost all dimensions has been greatly increasing over time. And I don't think this—I think this is a little non-consensus. I think this is something that some people might be surprised by or push back on. Some people feel like the more the world moves faster and is more and more fast-paced, oh, maybe we're losing control. A number of authors over the decades have expressed this idea that, oh no, the more things speed up, the more they're going to fly out of our hands; we're going to lose control.

When I look back over the millennia and tens of millennia of human history, what I see is that actually we are more in control of our lives and our world than any of our ancestors at any time. Even though our world moves faster, we have even better ability to control it. So we are more in control of our fast-paced world today than Bronze Age kings were in control of their relatively slow-paced world, and even they were more in control than tribal hunter-gatherers were in their extremely slow-paced world in the Stone Age.

The reason is that just as technology and industry speed things up and accelerate the pace of change, they also give us more and more tools to deal with change: to know what's going on in the world, to understand the implications, to communicate about it, to coordinate on a response, and then to do something about it. So I actually expect that the faster-paced the world gets, the more we are going to be able to control it, to steer it.

[00:06:16] Beatrice: And yeah. What do you think we should do with that agency? Like, what is the ultimate goal that we should pursue?

[00:06:23] Jason: In some sense, anything we want. It is up to us, and it's up to each individual and us as a society to decide where we want to go, to chart our course and steer our direction. But I think we need to make those choices consciously, and I hope that we will use the newfound agency to live healthier and happier and more meaningful lives.

I think that as technology and industry have progressed, as we've gotten wealthier and more and more choices have been opened up to us, I think we're at an age where you and everyone alive today has more ability than ever before to choose a rich and fulfilled and meaningful life. We can choose what work we want to do that fits our personal inclinations and talents. We can choose what we want to do in terms of who we want to marry, how many children we want to have, where we want to live, what kind of lifestyle we want to lead. We can surround ourselves with art and music and aesthetics to express our own personal taste and lifestyle. We can enrich ourselves; we have access to pretty much all of the world's knowledge, art, philosophy, music, and entertainment. So we can fill our lives with beauty and with joy.

And so I think all of these things are out there now. I think there are some signs that maybe people are taking even less advantage of this now than in the past, right? So maybe people aren't filling their lives with love and family and children. Maybe they aren't reading as much as they used to, right? In some ways, aesthetics have gotten worse, right? Our world is not as beautiful as it used to be. People talk about this a lot in architecture, but it's also, I think, true in maybe fashion, in automobiles—you could argue in visual arts and music. But I think this is a product of our choices, right? And this is a product of what are we doing with the great choice now open to us. And so I think what's happened is technology and industry have opened up all of these choices; now we need to find a better place and a better equilibrium to be within that.

[00:08:36] Beatrice: We need to find our footing or like our priorities, basically. Yeah. What do you think—what should we be doing now in terms of concrete next steps? What would you like to see in trying to figure this out, basically?

[00:08:52] Jason: Yeah, there's a lot to do. And personally, I feel like I've been able to achieve a lot of meaning and joy in my life, and so I've been thinking about whether that's something that I can help other people figure out how to do. I don't know if I'm the best person to figure that out, but let's not lose sight of the fact that also there's a whole lot more progress to be made.

The book and the essay series are split into three parts. The first part is basically the value of progress. The second part is the future of progress: Can progress continue? And then the third part is the culture of progress. And so I think a lot about those last two things. So one is: What is the future of progress? There's so much more progress to be made. I look forward to a future in which we cure aging and death, in which we can zip around the world in supersonic planes and maybe blast off to other moons and planets, in which we have nanotech manufacturing and genetic engineering—and all of it powered by maybe solar and nuclear energy and geothermal and other forms. There's a lot of that. And part of what I want to do is get people excited about that future and inspire people to go work on it.

[00:10:07] Beatrice: I think, yeah. So if we deal with the first part—on the criticism that I think most people might have when you talk about like, "we should expand agency"—is maybe climate as a counter-example of, "Oh, we think that this is an example where humanity has damaged the planet more than helped it." And I think that you actually make a bit of a case there that it's not about stopping climate change; it's about giving—building something like a thermostat for Earth or something like that. Do you want to expand a bit on that?

[00:10:46] Jason: Yeah, that's right. So, I have a chapter in the book, and there's an essay on the site titled, "We Should Install a Thermostat for the Earth." And what I do in there is I reframe the goal from "Stop Climate Change," which is a framing based on human non-intervention in the environment.

So let me back up a little bit. A little earlier in the book, there is an essay where I talk about our relationship to the environment. And one of my maybe more radical stances is that I am an unabashed anthropocentrist. I believe that the value of nature entirely derives from its value to humanity. That's partly its utilitarian value, partly aesthetic value, psychological value. So we can have an expansive concept of what is the value of nature to humanity, but I don't believe in the intrinsic value of nature. I don't even really think that's a coherent concept.

When it comes to—so I'm very much against the non-intervention framework that says we have to limit our impact on the environment. I think we need to have thoughtful impacts on the environment that are good for humanity, systemically and long-term. But ultimately, that impact on the environment per se is not necessarily a bad thing.

So when it comes to climate change, my framing is that rather than take a non-impact framework and just say we should not change the climate, I think what we should do is we should look at this and we should say, "Oh, we're having an inadvertent impact on the climate—something we didn't plan or choose, something that sort of came about as a side effect." And increased human agency means being aware of those side effects and controlling them, right? Not having inadvertent effects that you didn't want and that maybe you wouldn't have chosen or bad trade-offs, but being able to even control the side effects of industrial civilization.

And I talk about it as a control problem. I say what we need to do is—let's reframe this positively and in terms of human agency: We need to achieve climate control. We need to be in control of the climate, not just letting it happen to us inadvertently. In that essay, I dive into some of the technical possibilities and I sketch out not only where we could get abundant, reliable, cheap, clean energy (which is obviously one key part of the equation), but also technologies for carbon removal. I ended up, after looking over a bunch of things, being most bullish on enhanced rock weathering. And then I also look at the technologies for albedo control—so that is sometimes called geoengineering or solar radiation management. And I think that what we need is a combination of all three of those things and solutions up and down that stack to achieve climate control.

[00:13:26] Beatrice: And one of the things that I thought—so you claim you're an anthropocentrist...

[00:13:33] Jason: Anthropocentrist, yes. Putting humans at the center.

[00:13:36] Beatrice: Exactly. Yeah. I do remember thinking about—and there was a bit of a note also—that even though you're an anthropocentrist, you think that could also be good for nature and for animals and things like that. Do you want to also expand a bit on that?

[00:13:53] Jason: Yeah. Again, what does it mean to be good for nature? Nature's not one thing. It doesn't have one set of goals or one ideal state. There are different animals and different species and plants—there's all sorts of organisms. And you might be able to have some notion of an overall more vibrant or less vibrant ecosystem, but I just don't really think there's a coherent thing in there.

And certainly, the point that I make in the essay is: If you look at what in practice are people's positions on different issues and you ask yourself, "What is the simplest model that explains all of their positions?" So I talk about different things that it might mean. People talk about the value of biodiversity, right? "Biodiversity is a value." Does that mean we should genetically engineer new species to have more diversity? If you say that to someone who says that they care about biodiversity, they will generally recoil against it; they'll have the worst reaction. "No, no, not like that. That's not what I meant," right? Okay. If climate change is bad, then why shouldn't we do solar radiation management to control albedo and control climate change? They're very much against that; it's extremely controversial. That's not what they want either. If a vibrant ecosystem is a great thing, should we terraform Mars to have a vibrant ecosystem, because right now it's mostly a bunch of rocks? No. That would also be a bad thing, right?

So you get all of this pushback. The only consistent position that explains all of these positions on all these issues is essentially: Anything humans do to intervene is bad. And anything that already exists—anything that's non-human, anything that existed before us or anything that animals do without us, etc.—is fine and is okay and is good, but anything that humans do is bad. That is not a positive value. That's just an anti-human stance. And so I think that is deeply—it's nihilistic, right?

When we think about the value of nature, we should think about things like—again, obviously nature has utilitarian value. It's like it's infrastructure for us. It provides services to us, and so we should think about maintaining but also maybe enhancing or upgrading that infrastructure. Nature has aesthetic value, obviously. It's lovely to be out on a clear, sunny day—fresh water, white clouds, trees, sunshine, big open spaces, mountains, oceans. All of these things just give us a fresh aesthetic sense, and so we should always have those places for the sake of human enjoyment. Even there, enjoying nature is an artificial human construct. National parks, for instance, are—you have to find the best spot for a park, you have to cordon it off, you have to find trails through it, groom the trails, build stairways and railings, fence off dangerous areas, control wild animals. It feels like being out in nature, but it's a highly artificial experience to make it safe and pleasurable and enjoyable.

And then finally, I do think there's room for a psychological, moral sense of caring about animal welfare. And I don't have a very firm, strong position on how much we should care and how much it's optional, right? Is it a personal value or is it a universal value? Are we obligated to care? I don't really know. But I do think there is some way in which it makes sense to care about animals just because we psychologically resonate with them. I think it's a little bit of a luxury. It is a luxury of our current status of technology and wealth and infrastructure that we can devote resources to caring about animals. Certainly, tribal hunter-gatherers didn't really care and they would gladly use animals. Some tribes would run an entire herd of buffalo off of a cliff just so they would literally pile up corpses at the bottom, just because that was a convenient way to essentially hunt them. I think caring about animals is something that we now have the luxury of doing. We can pay extra for cage-free eggs or free-range chickens or whatever, or other technologies like in-ovo sexing. And that's great and we can do that if we want to, but again, at the end of the day, it's a human value—the human psychological value.

[00:18:32] Beatrice: Yeah, I think for me, it's obviously a luxury, and you also see that if you visit a poorer country—there's not so much room; you need to make sure you survive yourself. To me, I think I like it as a form of humanism, or like that should be the goal that we're striving for—that should be the goal that we're in general trying to care for all conscious beings, and yeah...

[00:18:58] Jason: Except the mosquitoes.

[00:19:00] Beatrice: That is true. I'm sorry.

[00:19:04] Jason: I support eradicating mosquitoes, at least the species that spread disease.

[00:19:09] Beatrice: Yeah. But has anyone done interesting research on what happens? Like, how does that affect the ecosystem and can we then be...

[00:19:17] Jason: I haven't dug into this, but what I've heard is that it probably wouldn't be an issue. Okay, yeah, we can just get rid of them.

[00:19:22] Beatrice: Yeah. Okay, good. Then we get rid of malaria while we're at it—it's perfect. Yeah, you do have this whole chapter basically called "Ode to Man." Yes. So it's basically like trying to uplift humans because there is this anti-human stance.

[00:19:41] Jason: Yeah. That one is a bit of a more brief and poetic chapter—more appealing to something than making an argument there. But yeah, it was just to try to remind people that, like, humans are actually pretty great and it's okay to be human. We should be proud of being human—what we can do and what our accomplishments are and have been. We should believe humanity to be worthy of being at the center of our own moral code. And we should have reverence for human achievements, including the achievements of industrial civilization.

[00:20:14] Beatrice: And when you've talked about this, are there any particular arguments that actually convince people and make them agree with you on this?

[00:20:24] Jason: Oh, I don't know. I don't know if I've ever changed someone's mind on whether humans are good. That feels like a pretty deep conviction. But for people who are open to it—if the argument resonates with them—I wanted to just put it out there and kind of stake it out. Part of what you do in a manifesto—you can't really argue against every possible objection. So part of what you do in a manifesto is you just put an idea out there and you give it a clear statement. And that's a lot of what I was chartered to do.

[00:20:53] Beatrice: Yeah. It must be a very good practice, actually: "Where do I stand? What do I think about all these things?" Yeah. So another really important thing that I think goes hand-in-hand a bit with this discussion of "are humans bad or good?" and "is technology bad or good?" is this point of risk—risks that come with technologies and things. I did really appreciate that in the manifesto as well—that you talk about how being too dismissive of risk is not a good idea, basically, when you develop these new technologies and things like that. What did you think there?

[00:21:38] Jason: New technology very often—almost always—creates risks and problems. There are costs and risks to technological progress. And progress, when you look back on it historically and philosophically—making progress doesn't consist of ignoring the problems or the risks, or dismissing them or downplaying them, or just blithely plowing through and hoping it'll be all right, or trusting that it'll be all right. Progress actually comes about in significant part from acknowledging and embracing the risks and the problems, and then using our best efforts to solve them.

I have a chapter in the book I call Solutionism, because I want to get away from what can often be an unhelpful dichotomy of optimism versus pessimism. Optimism can become complacency where we're so optimistic that we don't even see the problems or admit the existence of the problems. On the other hand, pessimism isn't good either, because pessimism can be so defeatist—fatalistic—that it doesn't see the opportunity for solutions.

So we should be realistic about acknowledging the problems, but then also ambitious and agentic in seeking solutions. So I advocate not necessarily a descriptive optimism, but a prescriptive optimism. Descriptive optimism says the future is bright, the path will be easy, it's going to be smooth sailing, no major problems ahead. It's a rosy view, and it's just not always correct. Sometimes the reality is a much darker picture. But prescriptive optimism says, "Look, no matter what lies ahead, we are going to bring our best efforts. We believe we can solve it. We're going to step up to the challenge. We're not going to give up in a sort of fatalistic or defeatist way."

So that's the kind of optimism that I advocate. And so I tend to speak less about optimism and more about agency and ambition. And so I call it solutionism in the book. I give a few different examples of times when even technologists embraced risk and problems and stepped up to solve them.

I talk about—let me see if I remember a few of the examples I gave. One was risk from electrical products in the early days of the electronics industry and electrical industry, especially for fire, and the testing and standards and certification that were developed in order to deal with that. I talk about the smog that was created by automobiles and how we invented the catalytic converter in order to get rid of the pollution in our atmosphere.

I talk about a really interesting case of an agricultural crisis—a looming agricultural crisis that some scientists started to see in the late 1800s. There was a scientist who started to see that we were running out of fertilizer, and he predicted that if nothing changed, population was going to overtake food production capacity and we were going to have famine. And he was called a bit of an alarmist for this. But his response to this—or what he got out of this—was not, "Oh, we need to limit population" or "we need to all become poor or learn how to deal with less." What he did was he called on the chemists of the world to invent synthetic fertilizer. He wanted a way forward where we could continue growth—population growth and economic growth. And it turned out, by the way, that he was exactly right and synthetic fertilizer was exactly the way forward.

You look at him and you're like: Was he an optimist or a pessimist? He was very pessimistic about the problem looming, and a lot of people at the time wanted to just say, "Ah, you're worrying too much," and he wasn't—he was correct. But he was optimistic about a solution, and he was also correct about that, right? So that's the solutionist mindset that I advocate.

[00:25:37] Beatrice: Yeah. I think that the point that I really appreciated, that I think is greatly underappreciated in general, is that safety is not like a natural feature. And so I think that really ties together the point of, "Actually, people are good and humanism is good" because we can invent safety. That's something that we can do.

[00:25:57] Jason: Safety is an achievement, right? Life is inherently risky, both because nature is inherently risky—and for millennia we faced risk from fire, flood, famine, disease, you name it. Technology is also inherently risky. And again, every technology brings new problems. And so we need to achieve safety and security and resilience against both the hazards of nature and the hazards of technology.

[00:26:26] Beatrice: It's very easy to fall into—if we go back to this sort of pessimism/optimism thing—pessimism often sounds like you're smarter, or at least that's how people perceive it for some reason.

[00:26:44] Jason: I wrote an essay a long time ago called "Why Pessimism Sounds Smart."

[00:26:46] Beatrice: Yeah.

[00:26:47] Jason: And the thing that I realized was: A lot of times you want to figure out what's going to happen in the future and you try to project. What do you project? You project current trends. And if you want to make a smart, sober, wise, rational analysis or prediction, you don't assume any breakthroughs. You don't assume any wild new things coming out of left field, right? You just say, "What would happen on current trends?"

And current trends always look bad in the long term because—if you understand especially the S-curve of technologies—all technologies plateau at some point and they stop giving us more gains, whereas maybe population and demand and everything just keep growing. But you cannot forecast infinite progress if you just extrapolate current technologies. So you always run into something where you say, "This is only going to last so far; here's problem X, Y, and Z that we don't have any solutions to." So once we hit those, absent some totally random thing coming out of left field that I'm not going to include in my analysis, we're not going to solve those problems.

The thing is, if you look at the course of history and how things have actually run, what happens is we do solve those problems. We do create new breakthroughs. There are always things that couldn't have been accounted for in the forecasts. So in some sense, to be an optimist, you have to believe that some solution is going to come along that you can't name, and you don't know what it is or where it's going to come from, but somehow we're going to solve this problem. And that just sounds almost irresponsible, right? It is. And so pessimism has this way of sounding smart and wise and sober and rational, but it's actually just wrong. Like on a deeper level—or at some sort of meta-rational level—optimism actually is what is warranted from the history of humanity.

[00:28:44] Beatrice: Yeah. The pessimist can use current data, whereas the optimist has to trust human ingenuity.

[00:28:51] Jason: We have to extrapolate some really long-term thing. Look at this world GDP curve!

[00:28:56] Beatrice: Yeah, exactly that! Or you have to like, "Oh, maybe this and maybe that." And so, yeah, it's not as clear, obviously.

[00:29:03] Jason: That's why the fundamental thing you have to see is you have to see humanity as a problem-solving animal. So I have a chapter in the book titled "The Problem-Solving Animal," where I try to go into why is it—I first show that history shows that we can overcome problems, even ones that seem totally insoluble or intractable until we finally find the solution.

And I tried to get into some kind of deeper reason: Why is it? And so I came up with kind of a two-part answer to this.

The first part is that the solution space within reality—or the possibility space within reality—is combinatorially vast. So the number of small molecules that are possible just from recombining atoms in non-polymer small numbers has been estimated at something like 10^60. It's an astronomical number. The number of proteins that are possible, or just like sequences of amino acids or genes, is even vaster. To call that an astronomical number is not even fair, right? It's way bigger than anything astronomical. The number of whole genomes—and therefore of organisms that are possible—is just... you're looking at a six-digit exponent, right? 10 to a six-digit number. The number of computer programs that are under some reasonable limit of code, the number of codes of law or morality that we could have, the number of philosophies that we could have—it's all just vastly... you can't even fathom the combinatorial space that's out there.

And so that means there's so many things that we haven't tried yet. And that means there's just gotta be much better things out there than where we are now, and there's gotta be solutions to problems that come up. There isn't a law of physics that says that the solutions absolutely must exist, but when there's so many things to try, I think that's part of where the solutions come from.

And then the second part of the answer is: The problem with a combinatorially vast solution space, of course, is it takes that much time to search it if you just are doing brute-force search. So the second part to the answer is: It turns out the solution space has structure, and that structure can be exploited for efficient search. And it turns out that intelligence is a tool or a faculty that allows us to efficiently search through the structure of combinatorially vast spaces. And so that is how we have been able to find these solutions.
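As a rough illustration of both halves of that answer (a sketch added here, not from the book or the conversation): the spaces really are combinatorially vast, yet a search that exploits structure can find a good solution after checking only a vanishing fraction of the possibilities. In the minimal Python sketch below, the hidden target string and the bit-matching fitness function are purely hypothetical stand-ins for a structured problem.

```python
import random

# Part 1: the spaces are combinatorially vast.
# Amino-acid sequences of length 300 (20 choices per position) already dwarf
# the ~10^60 small molecules mentioned above.
n_sequences = 20 ** 300
print(f"20^300 has {len(str(n_sequences))} digits")  # about 391 digits

# Part 2: structure makes the space searchable without brute force.
# Toy example: recover a hidden 60-bit string by greedy hill-climbing on a
# structured fitness function (number of matching bits), instead of
# enumerating all 2^60 possibilities.
def hill_climb(n_bits=60, seed=0):
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_bits)]   # hidden optimum
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    evaluations = 1
    improved = True
    while improved:
        improved = False
        for i in range(n_bits):
            candidate = current[:i] + [1 - current[i]] + current[i + 1:]
            evaluations += 1
            if fitness(candidate) > fitness(current):
                current, improved = candidate, True
    assert current == target
    print(f"found the optimum after {evaluations} evaluations, "
          f"out of 2^{n_bits} ≈ {2 ** n_bits:.1e} possibilities")

hill_climb()
```

The point is only the shape of the argument: when a vast space has exploitable structure, even a trivially greedy search needs nothing like exhaustive enumeration.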

[00:31:29] Beatrice: Yeah. And we're going to get into that a little bit more later also because you are arguing that we're approaching this Intelligence Age. So yeah, let's talk about that. But just on the combinatorial space or these things, you do also talk about—and because I think another thing that people might think when you say this is that progress is limited by natural resources—and I think you actually argue that's not necessarily the case. Do you want to expand?

[00:32:02] Jason: Yeah, look, this is an old debate and one that I think was settled long before I came around, but I do try to summarize the case. If you look at the history of natural resources, there's basically no natural resource that we have ever run out of in any kind of a catastrophic way.

Generally, one of two things happens. One is that we just keep finding more and more—this is what happened to oil, right? Some people thought oil was going to run out. The U.S. is now—oil production's at an all-time high; the U.S. is now a net oil exporter, right? People were talking about it running out in the 1800s, and here we are 150 years later.

The other thing that can happen is sometimes a natural resource really does run out because there actually just isn't much of it. But what happens is we tend to anticipate this. We see it coming and we switch to something else. We switched away from whale oil to kerosene, right? In general, in the 19th century—in the 1800s—we switched off of a lot of biological sources of material which were not scaling with the economy and with the population, and we switched onto much more abundant mineral sources. And someday in the future, when those mineral sources run out, we are going to be able to anticipate it far in advance and switch to something else. So we tend to make a smooth [transition]. And all of capitalism and all of the market economy gives incentives to make these predictions and to come up with the new technologies in advance.

The sort of deeper, more philosophical point is that in a deep and important sense, there are no "natural" resources. There's no such thing as a totally natural resource because, again, in an important sense, all resources are artificial. All resources, David Deutsch says, are the product of knowledge. They are the product of—the right sand that we make our chips out of, that was not useful until we had the technology to make the chips. Deutsch points out that at some point in the past, some ancient or prehistoric person must have died in the woods of exposure, literally lying on top of the fuel that could have saved their life if they'd known how to make a fire with it. Knowledge is what turns all of these things into resources.

[00:34:27] Beatrice: Yeah. Which brings me to another interesting sort of counterpoint, which is: I've also heard the argument made that good ideas are getting harder to find. Yeah. What do you have as a counterargument to that, maybe?

[00:34:39] Jason: Yeah. In a sense, it's true that good ideas get harder to find over time because as we pick off low-hanging fruit, the problems that remain are more difficult. Also, the more we expand the frontier of knowledge, the more detailed and specialized our knowledge gets. Individuals have to specialize more narrowly in their field, it takes longer to get to the frontier, you need more years of exploration and education, and it becomes harder to cross-collaborate across different disciplines. So all of these things make it harder and harder to make progress over time.

But there is a counterforce: As ideas get harder to find, we get better at finding them. We have a larger population and a larger base of wealth and technology and infrastructure to put into R&D and to use to find new ideas. We have better scientific instruments. We have better scientific methods and statistical methods. We have better tools. We have better computational resources to gather and crunch the data. We have better ways of sharing ideas and recombining them over the internet.

And ultimately, this factor of us getting better at finding ideas has overcome and overwhelmed the factor of ideas getting harder to find. If that were not the case, then the pattern of history would've been one of deceleration. And the actual pattern of economic history is one of acceleration, right? If ideas getting hard to find were the only thing that mattered for progress, then we would've seen the fastest ever progress in the Stone Age because ideas were extremely easy to find in the Stone Age. They were everywhere. There was so much low-hanging fruit; it was practically sitting on the ground. All you had to do is pick it up, but we didn't have the people and the infrastructure to do so. That pattern of acceleration over history means that we've been able to get better at finding ideas faster than they've been getting harder to find. And I think we can continue to do that.

[00:36:44] Beatrice: This is the flywheel metaphor that you use, basically?

[00:36:47] Jason: Yeah. I talk about the—so when I try to explain this basic pattern of acceleration, and let me just define it a little more clearly. What we see when we look at the pattern of economic history is super-exponential growth, by which I mean growth greater than any exponential.

If you plot an exponential curve on a logarithmic y-axis, it turns into a straight line, right? Straight lines on a log axis mean exponential growth. If you plot world GDP on a logarithmic y-axis, it still bends upwards; it still looks like an exponential curve. That is super-exponential growth, where it's not, "Oh yes, we've just been growing at 2% per year for every year of human history." It's more: "No, in the Stone Age, maybe we grew at two basis points, like 0.02% per year. And maybe in the agricultural age we grew at closer to 0.2%, and now in the industrial age, we're growing at something like 2%."

So that's that acceleration. And if you ask where does that come from? My short answer is it comes from feedback loops. Progress begets progress. The more progress we make in science and technology and wealth and infrastructure, the more we have better tools to make progress with. And that is at all levels—from, obviously, more technology helps us do more science. Science helps us create better technology. A greater population and more wealth lets us plow more of that back into R&D, which then grows the economy even bigger.

As we notice that progress is happening and decide we want to make more of it, we create institutions of progress. We created the research university in the 19th century. We created the limited liability corporation. We created venture capital, right? And all of these mechanisms are things that we create. And so at every level—market size is another big one, right? Communication and transportation technology creates larger markets. And larger markets justify greater investment because now you have a bigger market to address with whatever you come up with. So all of these factors compound, and they actually increase the rate of growth over time. That's where we see this accelerating pattern come from.
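A small illustrative sketch (added here, not part of the conversation) of the distinction Jason draws: on a logarithmic y-axis, constant-rate exponential growth plots as a straight line, while growth whose rate itself rises keeps bending upward. The specific rates and the 1% per-year acceleration below are arbitrary stand-ins chosen only to show the shape, not a calibration of the historical 0.02%, 0.2%, and 2% figures mentioned above.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(0, 300)

# Constant-rate exponential growth: 2% per year forever.
constant_rate = np.full_like(years, 0.02, dtype=float)
exponential = np.cumprod(1 + constant_rate)

# Super-exponential growth: the growth rate itself rises over time,
# a stylized stand-in for the era-by-era acceleration described above.
rising_rate = 0.02 * 1.01 ** years
super_exponential = np.cumprod(1 + rising_rate)

plt.semilogy(years, exponential, label="constant 2%/yr (straight line on log axis)")
plt.semilogy(years, super_exponential, label="rising growth rate (bends upward)")
plt.xlabel("years")
plt.ylabel("output (log scale)")
plt.legend()
plt.show()
```

Plotted this way, the second curve does what Jason says the world GDP curve does: it still bends upward even on a log scale.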

[00:39:03] Beatrice: And maybe now is a good time to dive into this Intelligence Age point, because it looks like currently, where we are, we might really increase that part of this sort of flywheel—like our access to intelligence. What do you think that's going to mean for humanity?

[00:39:20] Jason: Yeah. I mean, AI is clearly a big thing. And I think the question for the last few years has been: How big of a thing is it? Within the tech industry, it's clearly the next big thing in computer and information technology, right? We had the PC, we had the internet, we had mobile, we had social web—not in that order—and AI is like the next big thing within computing.

But then there's a question of maybe it's bigger than that. Maybe it's the next big thing within the economy, like the next general-purpose technology—the steam engine or electricity or synthetic chemistry or something. There's another question: Okay, maybe it's even bigger than that. Maybe it is as big as the Industrial Revolution itself or the invention of agriculture.

If you look back at human history, there have been basically three, roughly speaking, modes of production and social organization. For tens of thousands or hundreds of thousands of years, we had a sort of tribal hunter-gatherer era in the Stone Age. For about 10 or 12,000 years, we had an agricultural era based on agriculture and large, settled societies. And then for the last 300 years-ish, we've had an industrial era based on energy and mechanization in nation-states. And it is possible that AI is a big enough change that it will be as big of a transformation. It will actually lead to a fourth age of humanity. It's been called the Intelligence Age, where it fundamentally changes the mode of production. Where in the agricultural age it was mostly agriculture and human labor doing crafts and manufactures, in the industrial age it was machines and energy. And in the Intelligence Age, it might be a fundamentally new mode of production based around intelligence as a utility, right?

[00:41:19] Beatrice: And, I mean, we talked about agency—human agency—as a really big part of this vision. How do you see that interact with the Intelligence Age? Like, how can we preserve meaningful human agency in the Intelligence Age? What do you think is like the best-case scenario?

[00:41:38] Jason: Yeah, I think we need to make sure that AI amplifies human agency and that we do not cede our agency to the machines. And that begins with just being aware of the difference and making the choice to do so. It has to do with how each individual chooses to use these tools. It has to do with how the makers of these tools design them. It has to do with how our social institutions evolve.

I think this is a huge open area of research and thought, and a lot of people are thinking about it—the Cosmos Institute, for example, is thinking about these things. And I know the frontier AI labs are thinking about these questions. I've been thinking about it too. I don't think we have a total roadmap right now, but I think one of the most important things for humanity is to figure it out. But yeah, I would like to see a future where what AI does for us is allow us to learn any topic that we want, achieve any goal that we have in mind, realize any creative vision that comes to us, and generally just live better and healthier lives where we're actually achieving all of our visions and goals and dreams.

[00:42:55] Beatrice: Yeah. Is there anything else you think that as we maybe enter this age now—anything else you think we should keep in mind or that would be important for humanity to get right as we [move forward]?

[00:43:09] Jason: Yeah, and I think we do have to think about our relationship with AI. We have to think about our relationship with all technology. Whenever a new technology comes along, it changes the landscape of what's possible and how we live our lives. And we are going to change, and our lives are going to change in response to it. We don't really have a choice about that, but we do have a choice about how our lives change.

When the internet comes along into your life, you have a choice of what to do with it, right? You can have a choice of using it for self-enrichment, to live a better life, for education and engaging with things that you will actually feel good about—or you can slip into a kind of mindless consumption. Just let the algorithm dictate where your attention goes, and then at the end of the day, feel really unfulfilled and be like, "Oh, I didn't mean to doomscroll all day," right? And those temptations can be very powerful, but you do actually have a choice at the end of the day, and you can choose how to live your life and how to direct your attention.

And so I think there's a similar thing with AI, right? We can decide: Okay, if I want to know the answer to something, am I just going to ask the AI and trust whatever it tells me? When am I going to trust it? When am I going to dig in? When am I going to ask it to cite sources? When am I going to go check the sources? I've been thinking about this a lot as I interact with AI more and more to answer my questions. Some of the best interactions I've had have been when I am using AI as a guide to material that's out there. So maybe I'm interested in a book that I haven't read but I know something about. I might start off by asking AI to tell me, "Okay, tell me a little bit about this book. Tell me about the context, give me an overview. What are interesting parts I might want to read?" Maybe I'll decide I don't actually want to read the whole book, but I'll read certain chapters. And then I might come back and ask ChatGPT, "Hey, I have some follow-up questions now that I've read the book or parts of the book. And I also am curious about what happened after this book was published." Maybe it was published 50 years ago or 150 years ago, and what's happened since then? How did its predictions play out?

That kind of thing has been, or I've found that, really enriching. And so that's an example, I think, of using AI to just save you time and direct you to resources where you can actually go and learn—but again, not just completely trusting it, and not just relying on it or starting to become dependent on it and letting your own epistemic faculties atrophy.

[00:45:44] Beatrice: Yeah. It's a bit like food or something. You don't just eat all the candy.

[00:45:50] Jason: Yeah, exactly. We need to develop a healthy relationship with food as well.

[00:45:53] Beatrice: Yeah.

[00:45:54] Jason: And that's also a personal choice. It's also something that technology maybe can help with, right? We have drugs now that seem to be helping people with this.

[00:46:01] Beatrice: Yeah. I think that's such a funny one. It used to be that horse poop was the problem on the streets—that was dirt—and then we got cars. And now I think Ozempic and drugs like that—it's just so interesting. Like with obesity and how we're able to just... yeah, the problem-solving animal, I guess, is the point that you're making: these unexpected solutions, I would say.

[00:46:26] Jason: Yeah.

[00:46:28] Beatrice: But if we back it up, because I take it that the reason you wrote this manifesto is because you think we need more of a culture of progress, and that that's not what we're seeing in the sort of mainstream culture right now. Why do you think that is?

[00:46:46] Jason: Yeah, so this is—so part three of the book is "The Culture of Progress." And I talk about how there was a time when we had a big, ambitious vision for the future. Certainly in the sort of pre-World War I era, people were very optimistic about technology and science and progress in the future, maybe even naively optimistic. I would say even up through the 1960s, at least, we were dreaming of things like moon bases and flying cars. And ever since then—the last 50 years or so, or even a little longer—we've really soured on progress, at least in Western culture. We are much more fearful, skeptical, than in the past. People are skeptical even of the very concept of progress, and some people will never use it without putting it in scare quotes.

And so I think we need to restore a belief in progress—not a naively optimistic belief. We need to learn the lessons of the 20th century. Technology and industry and science brought about a lot of horrible things in the 20th century. They brought about more destructive war, pollution, hazards from radiation and chemicals and drugs that weren't adequately tested, and automobile accidents and plane accidents. And there were lots of problems we had to solve. But we actually have solved a lot of those and we're on the way to solving many more.

And I think, in a sense, we intellectually or philosophically threw the baby out with the bathwater in rejecting the very concept of progress itself, just because there were certain problems that we hadn't solved yet. And so I think we need to move forward with a—not go back to the naive optimism of the Victorian era, but move forward to the kind of 21st-century synthesis where we acknowledge the problems but we feel that we have agency and ambition to solve them. And we have some vision of a world that is better than today, right? An optimistic vision of the future that's not just "avoid disaster"—it's not just "stop climate change and prevent pandemics"—but it's actually: Let's build. Let's build something new and better than we've ever had, certainly better than the world today.

[00:48:55] Beatrice: Yeah. Are you seeing any traction for it? Are you seeing any signs that we're able to rebuild this belief a bit more?

[00:49:05] Jason: It's very early for the progress movement, but it's got a lot of momentum. It's growing. The movement's really only about, I would say, six years old or something. The term "progress studies" was coined in 2019, and we held our first progress conference just last year in 2024. Yeah, I think we're really seeing momentum build. There was just this book out earlier this year, Abundance, by Ezra Klein and Derek Thompson, which popularized that idea; it's now caught hold in DC, at least among sort of the Democrats and on the left. I think the right has their own view of progress and American dynamism and re-industrializing. And so, politically, suddenly ideas of progress and abundance are winning ideas on both sides of the aisle, which is encouraging. Yeah. And more broadly, I think we're seeing the ideas spread within the scientific community, the engineering community, within Silicon Valley. So yeah, I'm optimistic about the growth of this movement.

[00:50:10] Beatrice: And if you think forward, what do you want to do? What do you think we should be doing to grow this movement?

[00:50:19] Jason: Yeah, sure. We need more and more writers who are laying the intellectual foundation for this movement. We need more people writing blogs and writing books. That's why my organization, The Roots of Progress Institute, has a fellowship program to support progress writers. We need to continue building community at events like the progress conference and Vision Weekend. We need to all eventually get these ideas out into the broader culture through channels like education, media, and entertainment.

We are getting ready soon to announce a student outreach program—I won't steal that thunder by talking about it now, but pay attention; or maybe by the time this podcast is out, we'll have announced it already.

[00:51:07] Beatrice: Follow Roots of Progress, yeah!

[00:51:10] Jason: I think we need more Hollywood biopics of scientists and inventors that actually dramatize the creative process and the process of discovery. I think we need more out of science fiction. We need more visions of a future that we actually want to build and are inspired to build, because we actually want to live in that future.

I think we need more journalists and commentators like Derek Thompson, Noah Smith, Ezra Klein, Matt Yglesias, Jerusalem Demsas, and John Burn-Murdoch at the Financial Times, who are actually aware of these issues and are covering what's going on and writing editorials from a progress lens. I think we need more technology media like Ashlee Vance's Core Memory or Asterisk Magazine or Works in Progress magazine or Asimov Press that are covering technology without hating technology. You don't have to be a cheerleader or an uncritical booster; let's just start from the premise of tech media that's not anti-tech. That's a growing field right now, and I think it is a market opportunity, so I'm encouraged to see that. All of these things are how we will build a culture of progress.

[00:52:21] Beatrice: Yeah. I'm actually a bit curious to dive into the stories point that you mentioned a bit. And thank you for writing—you wrote this post that I think I've shared a bunch on, was it "Seven Ways Sci-Fi..."?

[00:52:34] Jason: Yeah, something like "How Sci-Fi Can Have Conflict and Intrigue" exactly, and interesting stories without having to be a dystopia. So when I tell people, "Oh, we need more sci-fi that shows the world we want to live in," I get this reaction a lot of the time, which is: "Oh, but you can't do that because sci-fi has to have conflict," right? And there's no reason that conflict has to mean doomerism or dystopia.

So I outlined a bunch of different ways that sci-fi can be interesting without having those things. And it's things like: You can tell a man-versus-nature story like The Martian, right? You can have a story where the heroes are the builders and the villains are the people who want to stop them or tear them down. You can have a story where the heroes want to use the technology for good and the villains want to steal it and use it for evil, right? There are lots of different ways to do it. And yeah, I think I had seven or eight different ideas in there.

[00:53:28] Beatrice: Yeah. Yeah. I thought it was great, actually. Yeah. I always—my pet peeve is I think also just a good love story.

[00:53:36] Jason: Sure! That said, yeah, another thing you can do is you can tell an old story and just set it in a futuristic context, right? So a love story is a great example. The Quantum Thief is basically just like a detective story, but told on Mars with nanotech, right? That's true. Yeah. And there's different variations of this.

[00:53:54] Beatrice: Yeah, I think those classic stories just with another backdrop. Because now oftentimes it's like a very dystopian backdrop if it's at all a futuristic movie or anything. So if we revisit this conversation 20 years from now, what do you hope will have happened?

[00:54:16] Jason: I hope within 20 years the idea of progress will have gone mainstream. Ten or 20 years is about how long it takes for an intellectual movement to go from the first few weirdos writing blogs and books to where it breaks out into the mainstream media and mainstream education.

One thing I didn't mention earlier that I would love to see is more "progress" as a subject in the curriculums of our schools. I think in high school and university, students should be learning about the history of progress and the idea of progress and debating the philosophy of it. I think students graduate today without really having any understanding of or appreciation for industrial civilization, right? For the system that created and maintains our standard of living. Where did it come from? Why did we put it in place? What problems did it solve? What was life like before? What did it take to create it and what does it take to keep it going?

Charles Mann, the author of 1491 and The Wizard and the Prophet, has a series out in The New Atlantis called "How the System Works," where he is just going over the key things that everybody ought to know. So I'd love to see that series turned into a course at the high school or college level. Yeah, that's one example.

And then I would like to see all of those other cultural things that we just talked about. I would like to see regulatory reform to try to roll back the vetocracy and some of the overburden of the regulatory state that's really slowing down—being in the way of our ability to build pretty much anything right now. I would like to see more experimentation and reform in the institutions of science and research. I think we have a bit of a monoculture in science funding right now, where the vast majority of funding dollars are funneled through a small number of large, centralized, bureaucratic federal agencies. They tend to give these small project-based grants to university-based principal investigators with small labs to do this sort of incremental project work, and then you have to write it all up in journals in a certain format to get credit and build your h-index and so forth.

All of those things have a good rationale and maybe even a place, but we've become too focused and centralized on that one narrow model of how to organize and fund science. And we just need more diversification, more experimentation with different kinds of models—more experimentation with Focused Research Organizations (FROs), experimentation with ARPA-style models (Speculative Technologies is working on that), more experimentation with different ways of publishing. Not everything should be a journal paper. Especially—the journal paper is an artifact of... I mean, they were invented in the pre-electronic communications era. In some sense, it came out of the Republic of Letters and the Royal Society in the 1600s. And they've evolved somewhat since then, but they haven't evolved as much as they ought to in the age of the internet and Jupyter notebooks and so forth. So there are much better ways that we should be publishing, and I think we should be experimenting with this more—the way, for instance, that Arcadia Science and the Astera Institute are doing.

[00:57:33] Beatrice: Yeah. Do you have any—if there was one deregulation or if you want to do like a new regulation that would be the most leverage? Do you have any special one?

[00:57:45] Jason: It'd be hard to pick just one. NEPA reform—like permitting reform—is a big one that people talk about, rolling back the burden of environmental impact statements. I would say, certainly in the U.S. and UK, nuclear reform. There was just a big list of recommendations published for Britain's nuclear regulation program that would streamline those things and allow them to build again. And you could come up with a similar list for the U.S. Very recently, there was an executive order to repeal the ban on supersonic flight. I think that was great, and so that needs to be implemented—we need to change it from a speed limit to a noise limit so that we can actually fly supersonic again over land as long as we're not bothering people with the sonic boom, which is totally technologically possible. Those are a few off the top of my head. I do have an essay in the series called "The Progress Agenda," where I lay out all of these things, so you can go there for more.

[00:58:45] Beatrice: Perfect. Is there anything—before I'm going to ask you one last question on your existential vision—is there anything you think we haven't covered that you want to say about the manifesto?

[00:58:56] Jason: No, this is pretty wide-ranging. Yeah.

[00:58:57] Beatrice: I think we covered a lot of it, but I still recommend people also read it. It was a great read and you'll get a lot more of the detail. But yeah, then let's end on the note of: What's your existential hope vision for the future?

[00:59:11] Jason: Yeah. It's the expansion of human agency. Yeah, it is—it's curing aging and disease so that we have more agency and control over our own lives and deaths. It is expanding artificial intelligence and giving everybody an infinitely wise and patient tutor and coach and assistant, allowing people to start entire new businesses with virtual teams and to realize any creative vision that they have—if they've got a great idea for a song or a movie or anything that they want, they can just make it without a whole team and a budget.

It is, yeah, increasing the speed of transportation. Again, I want to zip around the world in those supersonic planes. I would love to see us develop a real space economy that has an actual economic engine behind it, right? Where we're creating economic value and people are willing to pay for rockets and space travel. And it's not just a pure sort of scientific research program, and it's certainly not a national glory program the way it kind of was in the 1960s, but that we actually develop the economic engine of space. Those are just a few parts of it, but it's really—it's everything that Foresight is envisioning and working on.

[01:00:28] Beatrice: We are covering a lot of it. Yeah, actually. Thank you so much, Jason. Really appreciated this conversation. We're at Vision Weekend right now—Foresight Institute's Vision Weekend—so I think we're going to go out and try to listen to some of those talks about also just a bit of what we were talking about. But yeah, thank you so much.

[01:00:45] Jason: Thank you so much for having me. Great conversation.


RECOMMENDED READING

Media & publications

  • Works in Progress: Magazine focusing on the history and future of transformative ideas.
  • Asimov Press: Publication centered on biotech and the future of life sciences.
  • Asterisk Magazine: Quarterly journal exploring global trends and progress.
  • Big Think: Magazine that published the limited-edition "teaser" of Jason's manifesto.
  • Core Memory: Media project by Ashlee Vance (biographer of Elon Musk) covering technology history.

Key figures to follow

  • David Deutsch: Physicist and author of The Beginning of Infinity, referenced in the conversation regarding the nature of resources.