Podcasts

Jason Crawford | Progress: An Ever-Evolving Journey

About the episode

Jason Crawford is the founder of The Roots of Progress, where he writes and speaks about the history of technology and the philosophy of progress. Previously, he spent 18 years as a software engineer, engineering manager, and startup founder.

Session Summary: Jason envisages a future marked by dynamic, continuous progress, encapsulated in the concept of protopia. This vision diverges from the traditional notion of a utopia, instead embracing a reality of constant, incremental improvement. In Jason's view, progress is a journey, not a destination – a series of small but significant steps that, over time, lead to profound transformations in our world.

Central to Jason's perspective is the transformative potential of AI, paralleling historical technological leaps like the steam engine and personal computing. He views AI as a catalyst for a new era in human history, one that could redefine societal structures by making high-quality services accessible to a broader demographic. This democratization of resources, akin to services becoming as affordable as a Netflix subscription, could bridge societal gaps. However, Jason emphasizes that this protopian future requires collective agency, responsibility, and a balanced understanding of our role in shaping it. He believes that progress accelerates over time, with each innovation building upon the last, thus speeding up future advancements.


Xhope scenario

Protopia II
Jason Crawford

About the Scientist

Jason Crawford is the founder of The Roots of Progress, where he writes and speaks about the history of technology and the philosophy of progress.

He has written for the MIT Technology Review and given interviews as a spokesman for the progress movement to BBC and Vox. Vox named him to their “Future Perfect 50” list alongside Will MacAskill, Jennifer Doudna, and Max Tegmark. He consults for Our World in Data, and advises fellows at the Foresight Institute. He has received grants for his work from the Mercatus Center and Open Philanthropy.

Previously, he spent 18 years as a software engineer, engineering manager, and startup founder. He has a B.S. in Computer Science from Carnegie Mellon University.

...

About the art piece

Philipp Lenssen created this art piece with the help of generative AI. Philipp is from Germany and has been exploring technology and art all his life. He developed the sandbox universe Manyland and wrote a technology blog for 7 years. He's currently working on new daily pictures at Instagram.com/PhilippLenssen

Transcript

AD: Welcome to the Existential Hope podcast; we're really happy to have Jason Crawford here. You've been leading the way in this new field of progress studies, and actually implementing many of the findings of the movement – you could almost call it that at this point. You have founded this wonderful organisation, The Roots of Progress, with the mission to really develop and implement a new philosophy of progress. You've written extensively about that, and I hope to dig into it a little bit today. I've also been on for an AMA with you guys, which was really fun. And you've done some really amazing writing for Works in Progress, on how progress is not linear. So there's a ton of follow-up material that people can dig into, and perhaps after the interview we can touch on a few of those in a bit more depth. Personally, it was really fun to meet you a little bit more in person at our Vision Weekend, and we're hoping and excited to welcome you again this year. There's a lot of fun stuff, including the fellowship, that I'm dying to discuss a little bit more. But maybe you can lay out the land a little first: give us your three-minute history, how you got into the work you do right now, and what it is that you do every day.

JC: Yeah, sure. So I do two things. One is that I write about the history of technology and the philosophy of progress – on my blog, The Roots of Progress, and on Twitter, where I spend just a little bit too much time. I am also working on a book about the history of industrial civilization; we can go into more detail on all that stuff. The other thing I do is help to run a nonprofit, which is also called The Roots of Progress. It started as a blog, and we turned it into a nonprofit. As you hinted at, the number one thing we're doing with the nonprofit right now is a programme to support other progress writers, speakers, and intellectuals, and I'm happy to go into detail on what's going on there. I used to be in tech myself – my degree is in computer science, and that's my background. But long story short, I got into studying the history and nature of progress as an intellectual side project, and it took over my life. A few years ago, I decided to go full-time on it, and so I made the midlife career shift from being a software engineering manager and tech startup founder to being a researcher, writer, and nonprofit leader. So here I am.

AD: An extremely interesting approach to the whole thing, because you're not only coming from the theoretical side – you've really been in the belly of the beast, seeing progress and its challenges first-hand in your work life before you delved into this.

JC: I like to think that I come to the study of progress with an engineer's mindset. So I find that when I'm looking into the history of various technologies, I am more interested than the average researcher or writer in understanding the technical details – like, how did this actually work? Why was this a difficult problem? One of my favourite questions to ask is: why couldn't we have used some simpler solution? You can walk into a factory and marvel at the amazing complexity of all these machines around you doing all these things, right? But then the engineer in me says: couldn't this have been made simpler? Why do we need all of this stuff? It's funny – right now I'm in the middle of researching the history of agriculture. One of the things you learn is that one of the central problems in the history of agriculture is soil fertility: soil loses its fertility over time. One of the things that people did to combat this, even in the ancient world, was to plant legumes, a special kind of crop that, unlike every other type of crop, actually enriches the soil rather than depleting its fertility. So you wonder: why didn't people just plant legumes all the time? That sounds like the perfect crop, right? Why couldn't we have gotten away with that as a solution? Why do we have an enormous industry devoted to creating synthetic fertilisers today? Why don't we just eat all the legumes, right? There's an answer to that question, but these are the types of things I like to dig into – to really understand our world, to look around and see it not just as some arbitrary invention we came up with because we thought it was cool, but as a solution to a problem, an answer to fundamental human needs and desires, and how we solve those in the face of nature's constraints and, ultimately, the laws of physics.
And so I see the modern world as being produced largely by the fundamental nature of humans and the fundamental nature of reality – those two things coming together. So yeah.

AD: Yeah, civilization – it's like a problem-solving machine, and then you can evaluate whether or not we're doing well at solving these problems. So, what's the answer to the legumes question?

JC: I haven't totally figured it out yet. I think part of the answer is that people don't want to eat legumes all the time. Another part of the answer is that the yield is not as high – you actually get higher yields, in terms of mass or calories per acre, from other crops. So at a minimum, you want to alternate; you want to do a crop rotation, right? And if you can get fertility from somewhere else, then you also want to do that. In fact – thinking about things more the way Foresight thinks about them, in terms of the future – one of the really interesting things we might be able to do someday is genetically engineer other crops to do what legumes do. It turns out legumes can create their own fertiliser, or at least their own nitrogen, which is the biggest typical ingredient in fertiliser. They can make their own fixed nitrogen – except it's actually not the crops doing it, it's bacteria. Bacteria are the only organisms that can fix nitrogen out of the atmosphere and get you the nitrates and so on that you need as a plant. Legumes, it turns out, just live in symbiosis with bacteria that do this: they have these little nodules on their roots, and the bacteria live there. Okay – well, if they're in symbiosis, the bacteria must be getting paid somehow, right? What are they getting out of the deal? It turns out they get some of the plant's energy, some of the photosynthesis energy. And because of that, the vigour of the plant is actually reduced, because some of its photosynthesis is not going to its own growth; it's going to feed the bacteria. So if you did genetically engineer corn or something to be self-fertilising in this way, there'd be a trade-off; it doesn't come for free, and the corn's yield would be reduced at least somewhat.

AD: Alright, more genetic modification on the horizon. Wonderful.

JC: But it would be cool, right? Synthetic fertilisers are expensive, and fertiliser runoff has issues. It'd be cool if we had that option.

AD: Yeah, certainly, that would be a really nice one. You have also written, for Works in Progress, pieces on how progress is not linear. Can you give a few highlights of that? Because I think that is a really nice fast track into your worldview.

JC: Yeah. So a lot of this is around the question of the relationship between science and invention. At a very high level, we tend to figure stuff out via science, and then we apply it to inventions. We even call it applied science, right? The notion of applications or translation is right there; all those terms seem to imply that first we figure out some science, and then an invention is an application of that science. And there is some truth to that – if you're painting with a very broad brushstroke, it's true. But as soon as you start looking at the details of how things actually happened, you realise that the model is simplistic and very quickly breaks down. Very often, we make new inventions based on science we don't totally understand yet; often the science comes along later to fully explain how things work. The classic example of this is the steam engine versus thermodynamics. The steam engine is invented in the 1700s, and thermodynamics doesn't come along until the 1800s. In fact, thermodynamics was literally inspired by steam engines; the point of thermodynamics was to study these engines and learn better how they work. So thermodynamics was great for optimising engines, but it wasn't what we needed to invent the steam engine. So what is the real relationship between these things?

I think, ultimately, there is a really crucial way in which invention depends on science, but the relationship is nuanced. It's definitely not linear – there's not some linear flow where everything begins in pure science and then goes through some pipeline to invention. It is much more reciprocal; the influence goes both ways. Science discovers some concept, like semiconductors, and people say, "Ooh, semiconductors are interesting – let's tinker around with them and see what we can figure out." And then maybe they think: maybe we could build a semiconductor amplifier. A solid-state amplifier would be really cool – can we do that? I don't know. Let's figure it out. And of course, I'm talking about the transistor. The science that we had when we began working toward the transistor was not sufficient to actually build it. The researchers at Bell Labs who invented the transistor found that the theory that existed at the time was not sufficient to explain all of the experiments they were doing. They were trying to make a transistor, and it wasn't coming out the way the theory said it should; they actually had to go back to the blackboard, revise the theory, and come up with new theory. But the theory and the science are guiding the tinkering, certainly. In fact, you may even shuttle rapidly back and forth between science and invention in that way. And then, after an invention comes along, science and theory are often used to optimise it – to improve the efficiency, the reliability, and so forth.

So science is there before, during, and after invention. But at the same time, invention is always getting ahead of science – always racing ahead of it. And I think there's actually a very simple way to understand why this will always be the case. Why does engineering get ahead of science? For a simple reason: because it can. You can go right out to the edges of knowledge, right at the boundary of what we barely understand, and you can play around there, and tinker, and find really cool stuff. In fact, that's where the biggest economic opportunities are, because that's where maybe you could do something cool that isn't straightforward. When creating something new becomes so straightforward that it's just an application of known principles – literally an application – we don't call it an invention anymore. That's just engineering. If you want to make a digital circuit that implements a certain binary function, there's no question today about how you go about that. Nobody would call it an invention if you made a digital circuit to compute some new binary function that had never been computed before; that's just engineering. In the same way, if I wanted to make a Twitter clone in 2023, it doesn't require any invention – it's software engineering, and any software engineer will know how to create that programme. It's very straightforward. And at that point, there's only limited economic value in creating those things. There's much more economic value in doing something like creating fusion, which is still kind of a science experiment. So I think there's always going to be a lot of super valuable stuff to do at the very boundaries – the fuzzy boundaries – of our knowledge.
And so it's always going to be the case that we're going to be coming up with inventions that science can't totally explain.

AD: Yeah, it's funny – speaking in analogy, that can also give us some hope for AI safety, in the sense that currently we're really engineering at maximum speed in terms of figuring out what kinds of models could work, while interpretability is perhaps more the scientific part of explaining how these models make specific decisions. Maybe that is a hopeful image: that interpretability can eventually catch up. If we can map, very roughly, the categories you explained onto AI – you've also looked into AI a little bit. Do you want to give us the lay of the land of that field right now? Do you have any hot takes?

JC: Gosh, everybody's talking about AI right now, and for good reason – it's pretty freaking amazing, the point that we're at. So the question I always ask myself is: what can I add to the noise? A lot of people are excited about AI for really good reason. One way you could think about it is that maybe AI is the next big thing in computing. We had the personal computer in the 70s – maybe the 80s, depending on how you count – then we had the web in the 90s, and then mobile in the early 2000s. Is this the next big thing? We've been wondering for a while what the next big thing is. Another way you could wonder about it is: is this the next big general-purpose technology? Should we think about it as analogous to the steam engine, the internal combustion engine, or electricity? And if you want to get really ambitious, you could ask whether this is actually the beginning of a completely new mode of production. We had the hunter-gatherer era, the agricultural era, the industrial era – is this the key thing, the way the steam engine kicked off the industrial revolution, in a certain sense? Is AI going to kick off some new era of economic history, of human history?

Some people would even go further than that and say it's like a new species: there were animals, then there were humans, and now there's AI? I don't know. So the question is how far along that spectrum you want to imagine this is – but no matter what, it's a big deal, right? And a bunch of people have written eloquently about why it could be such a big deal. I like the opening paragraphs of Marc Andreessen's essay about AI; they very clearly laid out a bunch of huge possibilities for what this thing could be. Just think of all the services that you can buy now that are very expensive. Have you ever hired a lawyer, even for the simplest thing, like getting out of a traffic ticket? Have you ever had to pay somebody to do your taxes – not TurboTax, not H&R Block, but actually getting an accountant? All of these things are super expensive, right? Doctors – you get a little 15-20 minute window with a doctor because they're so expensive. So think of all these things, and now imagine that all of them are taken down by orders of magnitude in price, so that almost anyone can afford a great lawyer, a great financial advisor, a great accountant, a great doctor, or a great tutor. So many things in the history of progress are about taking something that was once available only to the rich and bringing the price down so that almost everybody can afford it. Right now, these are the things you splurge on if you're rich: you have lawyers and accountants, you have the best doctors, you hire tutors for your children and maybe for yourself if you want to learn something, you hire a personal trainer, and all this stuff.
And so imagine if all of those things went from costing thousands or tens of thousands of dollars to costing tens of dollars per month – something like a Netflix subscription. That would be amazing; it would open up all of this stuff to a huge swath of people. And just as these things tend to be a bigger deal for the poor, the moderately well-off, and the middle class than for the rich, I think this would also be a bigger deal for people of average intelligence than for the smartest people. Because the smartest people, if something's really important to them, can maybe go out and figure it out on their own. The smartest and best-educated people can go read up on some tax issue and figure it out for themselves if they really have to. But not everybody can do that – not realistically, within the bounds of time and energy and effort that they have.

So it'll be an even bigger boon to them. So yeah, I think this could be huge. But here's maybe one of the most high-level ways to be optimistic – to think that the potential for AI is really huge – which is to look at economic growth theory and a major challenge it points out to us. That challenge, in a phrase, is running out of people to push the frontiers of progress forward. Now, that might sound like a thing we're not going to do – why would we run out of people? – but there are basically two things that go into this. The first is research from economists like Chad Jones at Stanford showing, in a nutshell, that in order to keep economic growth going at an exponential rate – to keep growing the economy at whatever X percent per year – we need to also continue to grow the inputs to R&D. If you look at something like Moore's Law, you get exponential progress out of it: exponential growth in the number of transistors you can fit on a chip. But to keep Moore's Law going at that exponential, or near-exponential, pace has taken an exponentially growing investment in semiconductor R&D. If you just look at what Intel and TSMC and everybody spend on R&D to keep this going, that has been growing exponentially, along with the exponential improvement in chips. You can see this across many different areas of the economy, and you can see it at the macro level. By the way, what I'm summarising here is essentially the famous "ideas are getting harder to find" paper, although some of the basic ideas go back to earlier papers by Chad Jones in the 90s. But basically – …

Creon: May I ask a question, Jason?

JC: Yeah, sure.

Creon: Thanks. Sorry for the interruption. I'm not arguing with Stanford economists about economics, but a question that I do have is: isn't that partially mitigated, in the case of Moore's Law and other things, by the virtuous cycle where, yes, they spend more on R&D and on the inputs for their advanced processes, but because of Moore's Law and other advances in technology, you get more bang for your buck? It's not like you're spending ten times as much money using the same old methods and technologies – you now have computers a million times faster, and other stuff, which is a multiplicative factor on this investment. So even though the investment might still be growing exponentially, I suspect that its exponent is smaller than the output exponent.

JC: That might be true. I don't remember the details of this case well enough to have an authoritative answer. But first off, just empirically: if you look at the dollars invested and the output that you get, you find this basic relationship. It's certainly true that as we come up with better tools for researchers to use, we should make them more productive. But part of that is that you still have to buy the tools, and buying the tools is a higher capital cost as the tools get better – so there's still more overall investment in R&D. Empirically, the relationship basically holds: in order to continue making exponential progress in various areas, or in the economy as a whole, it takes exponential input into the R&D process. Some of that is in the form of better computers for the researchers to work with, but that's still part of the overall input. And then the other thing – and I have to get to this to explain the significance of the next point – is that no matter how much of the process you can automate, as long as humans form some part of the bottleneck on R&D, we have to continue to increase the number of humans involved in R&D as well. Through the 20th century, we did this at quite an impressive pace – in part because of population growth, but also in part because of better education and increased employment in R&D. A greater percentage of the human population became educated and then went into R&D jobs. That factor, of course, can't continue forever: at a certain point, you've made everyone in the world a PhD researcher, and you've maxed out the potential of that particular way of increasing researchers.
And then, of course, the other thing that's going on is that around 1968, world population growth peaked, and it has been decelerating ever since. In fact, by some projections, population might plateau or even peak within the century, and possibly decline – so we might never get more than about 10 or 11 billion people, by these projections. So you have this overall concern for economic growth: if we want to keep it going at an exponential pace, where are we going to get the researchers from? At a certain point, just because of the population problem, we're going to run out of people to keep pushing the frontiers of progress forward. The way I think about it is that there's an expanding sphere that is the frontier of knowledge and technology, and each person can push forward a constant portion of that sphere. So the bigger the sphere gets, the more people we need to keep pushing it bigger and bigger. That's just a visual metaphor you can use to roughly understand this – it might not be quantitatively correct, but it's directionally correct.
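The dynamic Jason is summarising here can be sketched as a tiny toy model in the spirit of the "ideas are getting harder to find" literature. This is an editor's illustration rather than anything from the episode: the `beta` exponent and the 5% workforce growth rate are purely illustrative assumptions.

```python
# Toy "ideas are getting harder to find" model: knowledge A grows in
# proportion to the research workforce R, but each new idea is harder
# to find as the frontier A expands (the A**(-beta) dilution term).

def simulate(years, researchers_growth, beta=1.0, A0=1.0, R0=1.0):
    """Return the yearly proportional growth rates of knowledge A."""
    A, R = A0, R0
    rates = []
    for _ in range(years):
        dA = R * A ** (-beta)          # research output, diluted by frontier size
        rates.append(dA / A)           # proportional growth rate of A this year
        A += dA
        R *= 1 + researchers_growth    # grow (or not) the research workforce
    return rates

# With a constant research workforce, growth fizzles out over time...
flat = simulate(200, researchers_growth=0.0)
# ...but an exponentially growing workforce sustains exponential growth.
growing = simulate(200, researchers_growth=0.05)

print(flat[0], flat[-1])        # growth rate collapses with a flat workforce
print(growing[0], growing[-1])  # growth rate settles at a steady positive level
```

With a flat workforce the growth rate of knowledge decays toward zero, while an exponentially growing workforce settles at a steady exponential rate – which is the sense in which sustaining exponential progress requires exponentially growing R&D inputs.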

AD: It's interesting – going from The Population Bomb to now worrying about depopulation. There's a long list of worries from the past, and maybe we shouldn't over-worry about depopulation either; there may be other factors that could kick-start growth again. But I think in a recent podcast you mentioned that AI may be bottlenecked on humans, just because humans have more dexterity. So unless robots catch up very fast, it could actually be that the AI is doing most of the thinking and the humans are doing most of the execution – and that would be another bottleneck on humans, if that actually pans out to be true.

JC: Yeah, although of course, if that does turn out to be true, then you can just focus all the intelligence on resolving that bottleneck and then move past it. But just to put this all together and tie it back to AI: one way we could get around this problem is to have AI take over more and more of the research. If we can eventually remove humans as a bottleneck in the process, and ultimately replace human researchers with AI researchers, then we might be able to continue making progress without having to continue growing the human population. There might be other reasons to grow the human population, but if we don't do that, or can't for some reason, maybe AI can take over. And so Chad Jones says there's almost an escape velocity here: if humans can push the technology to the point where we can hand it off to AI in the relay race, then great – progress can just keep going. But if we don't quite hit that point, then there are fewer humans, things get pushed forward more slowly, and progress slows down into stagnation. So it might be that we need to hand off the research to the AI in order to avoid stagnation in the long term.

AD: Yes, we may have to hit some kind of escape velocity there to even get there. But how does it all tie in with the book that you're now writing? Give us the larger picture, because on The Roots of Progress website you say something like: in order to make progress, we must believe that it is possible and desirable. In the 19th century, people really believed in the power of technology and industry to better humanity, but in the 20th century people became a bit more sceptical and distrusted it a little more. And so you're really in for a new way forward, and I'm assuming that you're laying out a grander vision of this in the book. Is that correct? So yeah, give us a brief teaser so people can get excited.

JC: Yeah, sure. Let me talk about the book first, and we can go more into the big picture and the cultural factors if you want. The book is going to be both a history of industrial civilization and the lessons of that history. So it asks: how did we get here? What were the discoveries and inventions that created the modern world and gave us the standard of living we now enjoy – which is, of course, completely unprecedented in human history? And then, what are the lessons of that history for key questions about progress, such as: Is progress good? Can it continue? And ultimately, what should we as a society do about it? So that's the elevator pitch for the book. It won't be encyclopaedic, but I'm planning to cover a broad enough swath of the major areas of the industrial economy that you walk away from the book feeling – going back to that problem-solution orientation I mentioned at the beginning – wow, we, team humanity, have solved a lot of problems. We have come a long way. In fact, many of those problems have been solved so thoroughly, and with such finesse, that we've even forgotten the problems themselves ever existed, and the solutions have become invisible to us. People in every major city, not much more than 100 years ago, were wading through the muck of horse manure, because it took hundreds of thousands of horses to run a city – literally. In fact, I just found out that in, I think, 19th-century Paris, they grew an enormous amount of salad crops fertilised by the manure from the horses that ran the city's transportation system. Or epidemics of diseases like smallpox, cholera, and polio – in most of the world, we just don't think about these anymore.
We don't have to, because those problems have been solved and confined to the dustbin of history. We are so lucky and so privileged to live in a time when we don't even have to know that those problems ever existed, if we don't want to. But we should know that they existed once; we should have a little bit of gratitude and awe and wonder at this amazing world that we've created. And to do that, I think we need to know how far we've come – where we came from. Ultimately, I think this tells us something about who we are. The vision of humanity that I want to convey through this book is that we are problem solvers; we are problem-solving animals. And over and over again, throughout human history, across a staggeringly wide range of problems, we have come up with pretty amazing solutions.

AD: Love it. This is a wonderful segue for me to hand it over to Beatrice, because she'll be discussing some of the Existential Hope questions. And with that historic foundation laid, I'm hoping we can dig into what you think the future may hold. So thanks for that. Handing over to Beatrice.

Beatrice Erkers (BE): Yeah, thank you so much. So this is the Existential Hope podcast, and I definitely want to talk more about the future. You just shared that the vision you want to convey is that we're problem solvers. But overall, are you positive about the future? What do you think is going to happen?

JC: I think that what's going to happen is ultimately up to us. And I think even more important than positivity or negativity is a sense of agency – a sense that it's up to us to decide, and to do our best to make it a positive future. Yes, fundamentally, I do think it can be a very positive future, and I think that largely from the track record of the past, and, again, from what it says to me about the nature of humans and who we are. But that's not automatic, and it's not inevitable, and we have to always remember that. We have to always remember that the future is not guaranteed, and that success is not guaranteed. I think the two best ways to fail at anything are to believe that success is impossible, or to believe that failure is impossible. The way to succeed is always to understand that success is contingent – contingent, ultimately, on your choice and effort. And I think that's how we should see ourselves and our situation. That means there's a huge responsibility on our shoulders, and especially on the shoulders of every technologist, every scientist and inventor and startup founder, everybody who's involved in government policy, and, in a more indirect way, the educators who teach the next generation and the journalists and authors who communicate important ideas, and so forth. It's on all of us to create a great future. But I do think, fundamentally, that we can do it.

BE: Yeah, I definitely resonate with the sense of agency. I think that's one of those things where people seem to have really dropped the ball. But maybe that's a good segue to ask you about the progress fellowship. What is it, who should apply, and by when should they apply?

JC: Yeah, sure. Let me answer the detailed questions, and then let me give you a bit of the bigger picture. The detailed answer is that this programme is for basically anybody who wants to blog about progress and write about it for a general audience. It's for people who want to launch a blog, and it's also for people who already have one and just want to grow it, or just to get more immersed in the progress community. We've had people apply who are already relatively established writers, and people who are not yet established and really want to get going – we'll help all of those. The programme is going to be an eight-week intensive: you'll devote 10 to 15 hours a week for eight weeks, you'll write several pieces, and you'll get feedback on them from mentors and from your peer group. You'll go through writing instruction – we have partnered with one of the top online writing courses, and we've worked with them to adapt their course to deeply researched, long-form writing. And it's also going to involve some immersion in progress studies and the thinking of the progress community. We're going to have sessions with a bunch of great advisors and mentors. Just off the top of my head, this includes folks like Steven Pinker, Tyler Cowen, Tamara Winter at Stripe Press, Virginia Postrel, and Johan Norberg, who wrote a book on progress, and a bunch of other people – just check out the site for the full list. They'll do these sessions, answer questions, and be a resource. So yeah, I'm really excited about it. Let me put this in the bigger context of why we are doing this: what is our strategy, and what's our theory of change? Allison pointed out that I've said that in order to make progress, we must believe that it is possible and desirable. There are many factors that drive progress.
But I think a major one is our fundamental idea about progress itself. Do we think that continued progress is possible, versus the idea of inevitable stagnation – that we've run out of good ideas? And do we think that future progress would even be desirable? Is progress actually a good thing? Is it creating better lives for people? Is technological progress actual human progress? I think the more society sees progress as both possible and desirable, the more talent and resources are going to flow into actually creating progress. And the more people are fearful and sceptical of the very idea of progress, the more talent and resources are going to flow into the opposite – into stopping progress, or slowing it down, or binding it up in endless red tape. So fundamentally, progress can have cultural headwinds or cultural tailwinds, and part of what we're trying to do is reduce the headwinds and increase the tailwinds. And I think in order to do this – as I've said, and as it's become my tagline – we need a new philosophy of progress for the 21st century. In the 19th century, and up until World War One, the world was very optimistic about progress – frankly, a little naively optimistic about how easy it would be to continue making progress. People started to think that it was almost inevitable that progress would continue, and that it would definitely be good – that technology would always be used for good things, and not in the context of bad or oppressive social systems, and so forth. And that turned out to be false. The 20th century violently shattered the naive illusions of the people who thought that progress was just inevitable, and that moral progress would go hand in hand with technological progress.
And there were some harsh and very real lessons from the 20th century: there can be some very bad side effects of progress; there are costs and risks to technological and industrial progress; and it can be used in bad ways – technology in the hands of a Hitler or a Mao is not going to lead to good outcomes for humanity. But I think what happened in the 20th century was that those harsh lessons were taken a little too deeply, and a certain view of progress came about that said that maybe progress was actually a mistake – maybe we should stop trying to continue to advance science, technology and industry, and maybe even roll them back a bit; maybe we went a bit too far. And so you got the radical environmentalist movement, the sort of romantic, almost back-to-nature view; you got a lot of scepticism of the establishment and the system, and the countercultural idea that we should dismantle these big systems or fight against them, and that everything should be small and local. Small and local has its advantages, but it's not a way that we can do anything and everything. We just got a lot of these very fundamental forces, ultimately, that decided to set themselves up against progress. And they succeeded well enough – fortunately not completely – but well enough, I think, to contribute to the relative slowdown in technological and economic progress over the last 50 years. So I think we need to reverse this trend. We need a new way forward: not, of course, a return to the naive optimism of the pre-World War One era, but also not the fatalism and defeatism of the counterculture and the late 20th century. We need the right synthesis, where we are thoughtful and careful about progress, but also still feel that sense of agency.
And that sense of humanism – of human well-being as the ultimate standard by which we judge progress as good. So that's the big picture. To create that new philosophy of progress, and to advance it, I think we need a movement – an intellectual and cultural movement for progress. And that's what I hope progress studies will become, among other things. All such movements are based on a body of ideas, ultimately expressed, I think, in the form of books – not exclusively, but very importantly in the form of books. And so we need thinkers and writers: people who are going to think and research and write and speak, in many forms – journalism, blogging, and so forth – but ultimately, books are a really important form of this. And so one of the key strategies of The Roots of Progress as an organisation is to help create the new generation of progress thinkers who are going to create that intellectual base for the movement. The blogging programme in particular is our first effort within that, and it is specifically aimed at the first step in becoming that type of person, which typically is: launch a blog, grow it, and build an audience. All right – that's working all the way from the intellectual history of the last couple of centuries down to why we launched a blog-building programme, but hopefully that connects the dots.

BE: Yeah, definitely. I'm also happy you mentioned Johan Norberg. He was an advisor for me when I wrote a paper about the difficulties of liberalism and technological development – the fact that tech development brings so much good with it, but there's always some sort of backlash, too, or challenges. He also pointed out that development often comes from very unexpected areas – I think he mentioned online payment systems coming from online porn, and these sorts of things. Often very important knowledge comes from, maybe, not the nicest places. But so, in terms of what future we want: one of the things we always ask about is the idea of a eucatastrophe. That's literally the opposite of a catastrophe – a positive event, an event after which we're much better off. Do you have any idea what such a eucatastrophe would be?

JC: Going back to the original discussion about AI – that could certainly be it, right? Or some other major breakthrough. The literal logo of the Foresight Institute is a nanomachine, so nanotechnology would be another one of those things, or some breakthrough in genetic engineering. But we can all make up these things. I think one of the questions you had listed was, "Do I have a better term than eucatastrophe?" I thought about that, and I like Kevin Kelly's term protopia. One of the reasons I like it is that it's specifically set up as something different from a utopia – it's actually a difference in concept. Utopia is the idea that we reach some static end state where everything's amazing and wonderful. Protopia is a much more incrementalist – or, more importantly, a much more dynamic rather than static – concept of the good future. It's not the notion that we're going to find some static utopia, but the idea that things are just going to keep getting a little bit better and better over a long period of time. And I think that matches reality more than any static vision: reality is dynamic, not static. Similarly, I think reality tends to be incremental, and progress tends to be incremental – maybe punctuated, not completely smooth – but it tends to be gradual rather than happening in big explosions. So actually, I would question the very concept of the eucatastrophe. I don't think the good things happen in big bangs. I think they come about more gradually and incrementally, and even when you're in the middle of them, it can seem like everything's taking a very long time.

BE: Yeah. Actually, the example that Toby Ord and Owen Cotton-Barratt gave in the paper on existential risk and existential hope, where they chose the term eucatastrophe for this concept, was the creation of life – which certainly was a gradual process, I would say. So I also very much agree with that. Going back to the fellowship a little: is there any specific breakthrough in the next five years – thinking more near term – that would let you know that, okay, we're on track to this protopia?

JC: I think we're on track to it as long as we can continue the overall theme of, say, the last few hundred years – but really, it's the overall theme of all of human history. If you look at all of human history, going back to the beginning of even behaviourally modern humans some 50 to 80,000 years ago, the big theme is that progress accelerates. Progress was extremely slow in the pre-agricultural period, but it did happen – you can see it.

In fact, I would argue that technological progress is maybe even older than Homo sapiens, because you can see it – not in the fossil record, but in the archaeological record of stone tools going back almost 3 million years. Stone tools gradually evolved over that time. The very first ones were quite crude: it was literally just that you took a big rock and hit it with another rock until a big hunk of it whacked off and you had a sharp edge. Then, in the middle of the period, the rocks start to get a little more shaped – now they have a sharp edge all the way around. And by the end of the Stone Age, you have a whole detailed toolkit, where the rocks are very specially formed into precise shapes, and you have a diverse set of different tools with specific purposes – arrowheads and all this kind of stuff.

And so progress was going on; it's just that innovations came along every thousand years or so – or maybe not even that fast. Then, in the Middle Ages, you did not have a lot of progress happening, but it was faster than an invention every thousand years – maybe you got a major invention every century or so. You got the plough, and you got the spinning wheel (not an automated spinning machine, of course) – these sorts of basic things that increased productivity somewhat. It's just that, again, they were coming along once a century. Then the inventions started coming along once a decade, and maybe by now they're coming along every year – or maybe they will soon.

And I think the reason for this is that progress begets progress – progress compounds. The more progress we make, the more ability we have to continue making progress faster. Every communication technology allows us to spread new ideas faster, to search them, and to combine them. Every transportation technology opens up bigger markets and leverages all of our R&D. Every fundamental new manufacturing or energy technology opens up new types of things that we can create – new types of objects and machines, and so forth.

And so you get these general-purpose technologies, these fundamental things, and they either open up very broad new possibilities, or they make the very process of advancement faster in itself. And I see no reason that this won't just continue. So, you know, this is the great thing about protopia: it's actually already here. Utopia is something that feels like it's always in the future, and it's hard to imagine actually getting there. But protopia, if you zoom out enough, is just a description of the human condition. We're already here – in fact, in a certain sense, we've always been here. And you can just take a minute to step back and appreciate that fact.

BE: Yeah, I think that's a good way to start rounding off – really starting to revel in this success. One of the things you've mentioned throughout this conversation is the importance of intellectual development, learning, and reading – developing our knowledge to create this progress movement. Do you have any recommended reading? Where should one start if they want to learn about progress studies?

JC: We actually put together a bit of a resource guide when we launched the blog-building programme. If you go to fellowship.rootsofprogress.org, you'll see the announcement about that, and there's a link there that says "resources". That goes to a page we set up with a bunch of links and great places to get started – including a bunch of articles you can read, and, if you want to dive deeper, a top handful of books. So that would be my recommended reading.

BE: You have a guide – that's perfect. And is there any one person who inspires existential hope for you – ideally someone you think we should have on this podcast, so we can invite them?

JC: Oh, wow. There are a bunch of people who are really pushing the frontiers of progress, so it would be tough to name any one person who stands out – and most of them are pretty well-known names – but I'm happy to send you some podcast suggestions later.

BE: Perfect. So, last question – I see we have some questions from the audience too, that maybe we'll have time for. What is the best advice you ever received?

JC: Wow, the best advice I ever received. I mentioned towards the beginning that I used to be in the tech industry, and I was once or twice a startup founder. When you're a founder, you get advice from a lot of people – lots of people happen to give you advice, especially your investors. But I had one investor who just kept asking me, "What do you think you should do? What is your bottom line? What is your gut judgement here?" And she was the only investor who would do that. Everybody else would give you advice and then just trust you to go away and figure it out – and maybe, if you asked them, they would tell you, yeah, you should go with your own judgement in the end. But I had someone who explicitly reminded me to do that. And actually, somebody who isn't an investor asked me the same thing – that was Dustin Delgado, so thanks, Dustin. So maybe the best advice is actually meta-advice, which is that it's great to get advice, but it's also easy to over-rely on it. At the end of the day, every time I went against my intuition, if I had a strong one, in order to take advice from people who were smarter, more experienced and more accomplished than I was, it usually turned out to be wrong – because they just didn't know my situation as well as I did, and even the smartest and most experienced people can be wrong sometimes. So yeah, that's the meta-advice: listen to all the advice, seek it out, but process it, think about what it actually means, and at the end of the day, make up your own mind – even if that goes against all the advice you got.

BE: That's a very good point – collect the information, but think for yourself, I guess. So maybe we have time for one or two questions. Lucas, do you want to go quickly?

Lucas: Yeah, sure. Jason, thanks for the talk – that was chock-full of really interesting stuff. Relating to your advice point, I have a number of friends who, just like me, recently graduated, left jobs, or exited startups. What are some good problems you think motivated builders and organisers should work on, if you were to just highlight a few?

JC: Well, you could do much worse than to start with the Foresight Institute's list of cause areas – those are some of the biggest opportunities for humanity. It's really tough, because there are so many different things, and what is actually going to be the best opportunity is not going to be obvious to most people. So again, in the spirit of "don't just take advice", maybe the best advice on this particular point is: go become a deep domain expert in something that you are fascinated by and obsessed with, and from that vantage point you will be able to see the opportunities for what to work on. Also check out Paul Graham's latest essay, "How to Do Great Work" – he has a lot of good points along these lines, and in his other essays as well. But yeah, go learn a lot about something that you are fascinated by, and then you will be able to see the opportunities that maybe nobody else can see – even if you can't legibly explain at the beginning why they are great opportunities – and just go after that.

Lucas: Good. Sounds fun.

BE: Yeah, I think that was some great "don't take too much advice" to round off on. So thank you so much, Jason, for joining us. I'm really excited to release this episode and let people listen to it. Thank you so much, and I hope to see you at a Vision Weekend.

JC: Yep, looking forward to being there. Thanks so much for having me on – always a fun time. I have so much respect for what the Foresight Institute is doing, so keep up the great work, and thanks again.
