In this episode of the Existential Hope Podcast, Ed Finn, founding director of ASU's Center for Science and the Imagination, explores the impact of storytelling on our ability to envision and create better futures, and why we urgently need more hopeful narratives.
Ed shares his journey from a generalist interested in how technology shapes culture to co-creating initiatives like "Project Hieroglyph" with celebrated sci-fi author Neal Stephenson. He argues that our collective imagination is often stuck in dystopian loops or unable to escape the status quo, hindering our capacity for large-scale, positive change. By bringing together storytellers, scientists, and artists, we can craft "technically grounded, hopeful stories about futures we might actually want to live in."
In this conversation, we explore:
- The origins of Project Hieroglyph and the Center for Science and the Imagination
- Why storytelling is essential infrastructure for imagining better futures
- Protopia versus utopia, and a healthy "diet" of speculative fiction
- Frankenstein as a story about scientific creativity and responsibility
- Hopeful storytelling beyond books, in film, games, and other formats
Instead of utopia, Ed believes in protopia: a future that gets a little better over time, with real debate and complexity. Stories, in his view, are essential infrastructure: they help us imagine new possibilities, feel our way into unfamiliar futures, and stay accountable to what we create. Hope, for him, comes not from perfection, but from our shared capacity to care, collaborate, and continue imagining better.
Ed Finn is the founding director of the Center for Science and the Imagination at Arizona State University, where he explores how storytelling shapes our collective future. With a background in literature, journalism, and digital humanities, his work bridges science, technology, and the arts to create more imaginative, inclusive visions of tomorrow. He is co-editor of Hieroglyph: Stories and Visions for a Better Future and lead editor of Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds.
Beatrice Erkers: Great. Well, so Ed, I'm very happy to have you here. Thank you so much for joining our podcast. I think before we dive into your ideas, which I'm very excited about (you know, you've done a lot of really interesting work over the years that I'm happy and excited to dive into), I would also just love to hear if you could tell us a bit about yourself. Like, who are you? What are you working on? How did you get started?
Ed Finn: You're starting with the big questions, Beatrice. Thank you for having me. It's very nice to be here. I appreciate it. I don't know. I wonder who I am, you know, every day. So, I guess one answer to that question is I have always been a sort of incorrigible generalist. I've always been interested in a lot of different things. And I grew up moving around, around the world. My parents were both diplomats, and I had the experience of living in different countries and speaking different languages. And I think that informed my interest in learning about lots of different things and learning early that simple lesson that the world could be otherwise. There are lots of different ways to organize society and lots of different ways to even describe the world. Different languages have different vocabulary and different nuance, and that can actually fundamentally change what you think reality is.
So I brought all of that into my education, and I was the kind of student who was always trying to take all of the classes in all of the different programs. And I ended up optimizing for degrees that gave me a lot of freedom to take lots of different things. So to give you a sense of the kinds of trouble that this got me into, I majored in comparative literature in college, but I also had basically minors in computer science, creative writing, and something called European cultural studies. So I was always interested in lots of different things. As is true with so many people, I never expected to land in this job. And it's only in retrospect that all of the different strange things that I did make sense, right, and fit together into this jigsaw puzzle of my biography.
So I came to... I had a first career in journalism and worked at places like Time Magazine and Slate in these very low-level, entry-level editorial positions. Learned a lot, got my butt kicked a lot in terms of my writing and all sorts of things. And it was a really great experience, but also it was clear that I wasn't making it as a journalist. I should probably try to do something else. So I went back to graduate school and got a degree in English, which again was a sort of fungible degree for me. I wasn't really... I never really wanted to be an English professor, but I knew I was interested in how reading and writing are changing and how technology is having this huge impact on these fundamental human cultural practices that we, at least at the time, presumed to be stable across time. And I think it's increasingly obvious that they are not stable and are radically changing because of our technologies.
So I did that. And then I ended up here in Arizona, moved here because my wife got a job here, and I applied for this one-year fellowship at Arizona State University called University Innovation Fellow, which was full of buzzwords. It was totally unclear what this job was going to be. And in the third week on the job, my boss came and said, "Hey, the president of ASU, Michael Crow, just had this conversation with Neal Stephenson, the science fiction writer. Have you ever heard of him?" And I'm like, "Yeah, read everything he's ever written." And they started talking about how our relationship with the future is broken. Neal wrote this essay called "Innovation Starvation" about how he grew up with the Apollo program, with big national infrastructure projects, with... you know, we were going to be going to Mars next. The future was really bright. And when he was writing this in 2010, 2011, we weren't even flying our own astronauts into space. We were paying the Russians to do that. And the bright young minds were no longer going into these generational, huge-impact projects. They were going to make better Facebook ads or better mortgage-backed derivatives on Wall Street. You know, this really sort of incremental, short-term kind of innovation.
And so his essay was this polemic: why can't we do big stuff anymore? Why can't we think big? And so that landed on my desk. Well, at first, Michael Crow's response in this conversation (apparently, I wasn't there) was, "Well, you could blame the entrepreneurs and the scientists and the innovators that they're not thinking big, but maybe this is actually your fault, Neal. You know, I've read your books. They're kind of dystopian. And maybe we need to be telling more hopeful stories about the future that can inspire that kind of large-scale, long-term thinking and doing big stuff." And so that conversation struck a chord. And so I was asked to write up this memo: What would we do at ASU if we were going to do something about this problem? Which seemed so far-fetched and outlandish to me that I thought it was just a sort of test to see if this new guy could do something, if I could write a memo.
So I had a really fun afternoon where I just made up... I found this quote from Albert Einstein, "Imagination is more important than knowledge." And I started thinking, okay, well, what if we brought the storytellers and the artists together with researchers, technologists, and other kinds of practical innovators to imagine technically grounded, hopeful stories about futures that we might actually want to live in? And I thought this was just a thought experiment, and for a while it was, but it became clear that this had struck a chord at ASU, and this is something that I think President Crow wanted to do. He's a huge science fiction fan, not in the traditional sense of a fan, but in the sense of somebody who's really interested in organizational management and futures and institutional design. And he sees storytelling as a really effective tool for anticipation, for foresight, and for governance, like deliberating about what kind of futures we want. And Neal Stephenson was interested. I got to fly up and meet him and have lunch with him, which was amazing, and then start collaborating with him in the early years of the center.
And that's what happened. We actually made this thing happen. And so my interest in the many ways in which we can imagine futures, the many stories that we tell about the world, my interest in cross-pollinating and finding parallels and intersections across many different fields, my fundamental curiosity about everything all turned out to be helpful starting points for this thing that I'm doing now. I don't know if that's really... if I've answered the question you asked me, but...
Beatrice Erkers: I think you've given a wonderful answer. It's really interesting to hear about... it was a good story arc, I think, from the beginning, and you left me at a good place to take off in talking about Project Hieroglyph, which is where I first heard about your work. And I think a lot of the work that you did with Neal Stephenson really picked up on these ideas. So maybe you could tell us a bit: what was Project Hieroglyph? What was the purpose of it? And yeah... then I have some follow-up questions on that, I think, but let's start with just, what was this project?
Ed Finn: Absolutely. So from that very same conversation that Neal Stephenson and Michael Crow had, Neal went off and started thinking about what he might do to address this question. And he started talking to some of his fellow science fiction writers, technologists, other people in his networks. And he started to put together this idea for an anthology of hopeful science fiction that would address this question: could we come up with big, ambitious ideas that might drive real change? And for Neal, that really centered on the metaphor that he created with some of these folks in this network: the "hieroglyph." If you look at the history of science fiction, and especially the feedback loop between science fiction and real technological innovation, there are a few ideas that emerged first in science fiction and became really inspirational for real technological development, like the rocket ship, the submarine, arguably the radio. Arthur C. Clarke came up with the geosynchronous satellite. There are many examples of this.
And Stephenson's argument was we could call these ideas, these concepts, hieroglyphs. So they're idealized versions of future technologies. And Neal's line that I quote all the time is, "A good science fiction story can save you hundreds of hours of PowerPoint presentations and meetings because it literally puts everybody on the same page." So you can read a story about rocket ships by Jules Verne or Robert Heinlein or somebody like that and you can get this concept. And indeed, some of these concepts travel across many different writers and they get picked up, like Ursula K. Le Guin's ansible, which has been used by a number of other writers: this idea of an instantaneous communication device across the stars. That idea can then become not exactly a blueprint, but a telos, an aspirational goal. And that can drive lots of innovation and many cycles of innovation.
So that was the starting point for Hieroglyph. And Neal began to recruit writers and other participants. And as that evolved and our plans at ASU started to take shape, it made sense for our new Center for Science and the Imagination, which we founded in 2012, to become the institutional home for that project. So I became the co-editor of this volume with Kathryn Cramer, because Neal Stephenson didn't really want to be the editor; I think he wanted to write, not edit. And he wrote a wonderful novella-length piece for that book. This was also our first attempt to figure out how to create the right structure for this kind of collaborative discussion. How do you get these people into the room together, or create a scaffold for a science fiction writer and an electrical engineer and a philosopher, just to pick three random examples? How do you get all these people with very different perspectives to have a productive conversation about a possible future and maybe come to some kind of loose consensus or a shared vision of what an interesting version of that future might be?
So in Hieroglyph, we created this website, which has been deprecated since then, but there were all these amazing conversations where these really well-known science fiction writers were interacting with leading researchers in different fields and also students, undergraduates at ASU and other places, to speculate and kick around ideas about different futures. And over the course of the next couple of years, Hieroglyph became a book project, but also, as we termed it, this kind of invitation to many other people to engage in this fundamental idea, this fundamental premise that we can imagine more hopeful futures and that those might actually drive real change or open up new possibilities.
Beatrice Erkers: Yeah, I think so. In Hieroglyph, you focused a lot on hard science and hard science fiction, basically. A sense that I, and maybe many others, have today is that a lot of people believe things are scientifically feasible, but they have very little hope for social change. Do you think that we should do a Hieroglyph for more socially ambitious futures? Or, if you did this project today, is there anything you would do differently?
Ed Finn: That's a great question and sort of multi-part. So first, yes, I completely agree that we should be doing more Hieroglyph-like projects around societal change. And some people are doing those. There's actually an anthology coming out later this year with some people who are in the CSI network. I believe it's called We Will Rise Again, which is about social activism and science fiction. So that's just one example, and I'm sure there are others. Ursula K. Le Guin, who I mentioned before, is an amazing science fiction writer of that kind of social change, right? And thinking about society itself, language, culture as technologies, some of our oldest technologies, and recognizing that, again, the world could be otherwise, that those things can change.
One thing that we have kept from Hieroglyph, that's always a part of our projects, is we try to explore multiple possible futures. We're not in the crystal ball business, and we're not trying to engage in prediction, or a sort of traditional foresight and futures approach where we say, "Okay, well, we think this is the most likely prospect. We're going to focus on this scenario or this small subset of scenarios because we think this is what's most likely to happen." We're more interested in what should happen. And so sometimes I use the framing of possible, plausible, preferable. You want futures that are technically possible. You want futures that are plausible, by which I usually mean compelling, believable; and that's as much about storytelling craft as it is about technical fidelity. And then preferable futures means, would we actually... is there some reasonable argument that this is going to make the world better? You know, that this is a world we want to live in.
Now, all of those are guidelines more than rules. And that's one thing we learned early on with Hieroglyph, is that you can't require creative writers to follow a particular set of guidelines. I mean, you can, but you're really taking away a huge part of the spark and creative autonomy that makes fiction interesting. And we've always leaned more on the side of the power of storytelling as an art form that can then be informed by all of these other fields, but that you need to leave creative freedom at the heart of what you're doing.
So we had this set of three guidelines. I forget exactly how we came up with them; I think Neal was a central part of it. But one version of this was "no hackers, no hyperspace, no holocausts" as the fundamental guidelines for the writers in the Hieroglyph project. And we don't explicitly... we don't stick with that particular formulation usually, but the premise of possible, plausible, preferable really comes to the same thing in a slightly more abstracted way.
So one thing I want to say about Hieroglyph is that there are some stories that really grapple with this question of social change. So one that sticks with me is by the Canadian writer Karl Schroeder. He wrote a story called "Degrees of Freedom," and his logline or one-liner about the story was, "In the future, we imagine all kinds of technological change, but we still think it's going to be Captain Kirk in his command chair making a gut call about what to do," right? Just exactly what you're saying, that we think decision-making will stay the same. And I think our present global political moment shows us that we need to get better at decision-making. We need to think about how we can use our technologies to become more inclusive, to engage more people not just in particular decisions, but in the shared imagination of governance, civic participation, and feeling like we are all working for our collective best interests, right, rather than these sort of cynical and skeptical perspectives that lead to a lot of bad outcomes and poor decision-making. So there is a spread, right? There's a diversity of approaches to the future in that original collection. And that's another thing that we carry forward today.
But let me briefly explain what hackers, hyperspace, and holocaust mean. "No hyperspace" means no magical technologies, right? No faster-than-light travel or other things that seem to really defy the fundamental laws of physics as we understand them. "No holocausts" means let's avoid the easy apocalypse, which is the defining characteristic of so much commercial and popular science fiction. Apocalypse narratives or post-apocalypse narratives are really compelling. They can make for great and very commercially efficient entertainment because all you need to draw is a few zombies and a dead hellscape. And they're exciting, right? The explosions when the world blows up are very pretty to watch. Everybody likes to watch them. So there's something very compelling about that and that feeds a deep need, I think, in humans. But we have a lot of those stories, and we don't have very many stories about positive futures.
And so the last one, "No Hackers," was the idea that when people do imagine radical change, one of the most familiar tropes is the small band of scrappy outsiders who defy the conventions and break the rules and cut their way through the systems and the red tape to achieve some kind of transformative change. And that narrative also is not that realistic, not that plausible. Sometimes it's true, but actual change in the real world often is slower and based on collaboration and it's working within systems. It's not always about revolution. Sometimes it's about reform. And that was interestingly the guideline that was hardest for storytellers to follow, right? Because those stories... you know, a story of a bureaucracy evolving is a really boring story. Nobody wants to read that. So it can be hard to engage people. And again, you have this balance between storytelling and these larger goals that we have, right? That we're trying to actually change our collective relationship with the future.
So I could talk more about what we've done, how we've evolved our model since Hieroglyph, but I've been talking for a long time, so I'm going to give you a chance.
Beatrice Erkers: Yeah, I think one part of this that would be really interesting to dive into is the overarching theme: the emphasis on the importance of storytelling, the idea that storytelling is important and valuable in our society and institutions. So yeah, do you have a take on why storytelling is important? Because, you know, it's very easy to argue that it's nonsense. And I think it's also easy to just say vague words about, "Yeah, storytelling is important." But it would be interesting to break it down, since you've dedicated large parts of your career to this. So why do you think storytelling is important?
Ed Finn: So let's start with the fundamental axioms that drive my work and the work we do at CSI. One of them is that I think we are storytelling animals. We understand the universe through narrative. And this for me is deeply tied up with my understanding of imagination. Imagination is, you can think of it as this kind of mental holodeck from Star Trek in your head. Cognitive science is showing that the same parts of your brain, like the hippocampus, that activate when you're accessing your memories, are also activated when you're thinking about the future, when you imagine a future scenario. So there's this simulation engine inside your head, right, that you're using to model the past and the future. And this is part of the traditional understanding of imagination, from philosophy, from the arts, is imagination is how you think about things that aren't real, or how you see things in your mind that you're not seeing in real life.
And I think that's absolutely true, but I go a little farther. I think that that simulation engine is also operating all the time. We're also modeling the present. We're modeling our experience of the real. Your brain takes in all of this data from your senses and physical experience, but throws out most of it. This is why magic tricks work. This is why political discourse works the way it does, because we only sample that stream, right, a tiny fraction, a tiny percentage of that stream, and we incorporate it into an ongoing narrative of what we think is happening right now. And that story is constructed, and I don't know all of the details about how this works. I don't think anybody does. It's a fascinating topic of active research, but we are constructing an ongoing story about what is happening right now in the world. And that story includes a model of us, a model of yourself. You have a self-perception that you're building into that model, and you're also modeling how you think other people are feeling, where they are, everything from proprioception and physical location and mapping, geospatial stuff to emotional and social cues, body language. All of this stuff is happening largely subconsciously.
I think it's helpful to think of all of this through the lens of storytelling because so much of our cognitive processing happens through language. So of course, some of these things are not linguistic. I mentioned emotions, I mentioned body language, but we construct narratives, and there's this sort of instinctive... I think some of those narratives maybe are visual, like you construct a scene in your mind, but language is the most effective tool we have to compress and share these kinds of simulations of the mind. You know, we all exist in this gulf. Every mind is alone. And yet somehow we can bridge that gulf through language. And other things too, like music, but it's much harder to... all of the sort of technology of civilization moves through language, right? And moves really, I think, through storytelling.
So that's the floor for me, that's the basement, right, where all of the equipment is, the machinery is. And so I think it's stories all the way down, at least in terms of human experience, our experience of the world. And what is really striking to me, and again, this connection between imagination and narrative, is that we're so reliant on this system that we forget that it exists. We mistake the simulation for the real. And imagination is like electricity. You walk into the room, you flip on the switch, you don't think about the thousands of miles of wiring and power stations and all of this technology that's required for the light to come on. And you only notice electricity when it's not there, when something is broken. And in the same way, we tend to only notice imagination when we have failures of imagination, crises of imagination. But the real crisis of imagination, the failure of imagination, is that we cannot escape from imagining the status quo. We invest so much of our collective energy in shoring up our perception, our narratives of the world as it exists right now, but it's all made up. The stock market, healthcare, credit cards, we invented all of it. And we give all of these things power by imagining them and by continually re-imagining them.
So, coming back to Le Guin again, other worlds are possible. And the first step is really just to change the story that we're telling. So the second layer then is that there are all of these deeply entrenched narratives that define what we think is possible. They define our understanding of reality and they define what we think the boundaries or the horizons of possibility are. And so making imagination more visible, making storytelling more visible as an ongoing active process that we're all participating in is the first step to trying to change the future. If you want to change the future, you have to change the story that you tell about the future.
Beatrice Erkers: So it seems like there are a lot of spaces where you should maybe be telling better stories. What do you think is the comparative advantage of fiction writing, or something like that, for actually moving the needle? Because, as you mentioned, there's storytelling happening implicitly, constantly, in many different aspects of what we do. I'm thinking about this a bit selfishly now, because I'm working with this existential hope program, and we're also doing similar things in this vein. So how do you think we can best leverage that type of storytelling?
Ed Finn: And I'm excited that you're doing that work, by the way, and welcome. It's good to have more allies in this and fellow travelers in thinking about the future in this way. So a few things. And I wrote an essay about this called "Step into the Free and Infinite Laboratory of the Mind" that might be interesting for anyone who wants to dig a little deeper.
One benefit of speculative fiction is that it's make-believe, it's pretend, and that allows people to step into a future and address questions that they don't feel ready to talk about in the real. So when you have a technical expert or a policy expert, a lot of people are so entrenched in their specializations and sometimes, especially in things like policy, where there's this kind of trench warfare modality, right? People are so limited in their imagination of possible futures that it's basically impossible to get them to think about transformational change unless you begin with this premise that this is just pretend. And we have a variety of techniques in the work that we do to try to encourage people to take some creative leaps, to take more creative risks, and to let down their guard a little bit or step out of their sort of traditional intellectual armor and try to explore a broader possibility space. So speculative fiction can be really beneficial in that way that it invites everybody into a different future that is outside of their specialization.
A second benefit is that speculative fiction is broadly accessible. Most of us are pretty good at stories, at interpreting and understanding stories, at least on a basic level. Not everybody has a PhD in a literary field or anything like that, obviously. But we all have a lot of practice with stories because they are such a fundamental unit of cultural exchange. So we're much better at stories as a species than we are at probabilistic risk assessment or systems thinking or all of these other technical skills that are needed to do more quantitative approaches to futures and exploring possibility spaces.
Another interesting benefit of speculative fiction is that the experts themselves... you can't solve for the future by only working one field at a time, one sub-sub-discipline at a time, right? So, if we're going to address these major challenges we're facing, like finding a positive path forward for artificial intelligence, climate change, growing wealth inequality, all of these things that may or may not be existential threats to humanity in the century to come, you're not going to solve them by working in traditional academic disciplines or even just working on one small business case at a time. You need people from many different fields to work together, but those experts often are not very good at communicating. They don't have shared vocabulary and they don't know how to extrapolate in a collective way, right? They can only extrapolate down their lane, down that one subspecialty. And so, speculative fiction takes everybody out of their comfort zone a little bit because you're no longer working in your own particular zone and you're no longer using words in the way that they're used by your peers and your sub-discipline. One of the great things that happens in our collaborations is when a science fiction writer asks a question like, "Well, what do you actually mean by energy?" or some other fundamental term that people throw around in a disciplinary context.
So everybody can participate in the future, including all of these different experts and also members of broader publics, right? Because again, the story is more accessible. So we'll bring together a group of people to come up with a technically grounded, possible, plausible, preferable future. But that's just the beginning because then there's this narrative that includes human characters. It's an actual story. And that allows us to not just think about the future, but also to feel that future, to step into the shoes of people living in that future and answer questions like, "Well, what room of the house is this new technology going to be in and who pays for it? And what happens if you drop a piece of toast into it?" And all of these things allow us to kick the tires on this future in a much more engaging and accessible way. And also then to have a constructive debate because now you've got this room and you've got furniture in the room and now everybody can have a discussion and say, "Well, I think this couch is in the wrong place. This piece of this story feels really implausible," or, "I read this and I don't want to be in this future. This future made me feel really uncomfortable or sad." And now we can talk about it because we have this shared vocabulary. We have this shared space.
Another analogy I like to use is that a good... and this applies to all stories, obviously, but especially useful for speculative fiction... that these are microcosms. They're like compressed file formats of entire cognitive simulations. And we're very good at unpacking them in our heads and projecting them and running the film forward. Once you've read a great science fiction story, you can look around the corner in your mind. You've got a copy of the simulation in your own head. And you can say, "Okay, well, what if these two characters sat down for tea together?" Or "What would be different if this thing happened?" You can start to extrapolate and extend the model on your own. And that model embeds not just the things of the future (the objects, the technology, stuff like that) but causal relationships, causal models. It includes a kind of rule set, and different genres of storytelling include different rule sets. And that again can be really powerful to say, "You know, the rules of this future society are really different from our present." And that blows my mind because it never occurred to me that we could actually change that rule.
Beatrice Erkers: Yeah, that's a really good deep dive, I think. Maybe it's because I've just started digging into this whole universe a bit more, but my sense is that I also see more and more people doing something akin to this, at least if you count scenario planning or something like that as a type of storytelling about the future. We've done some of it at Foresight with the existential hope program. There was this AI 2027 scenario recently; I guess that was like a forecasting/scenario-planning thing. I know a lot of think tanks are doing it. There are Intelligence Rising workshops, I think, doing it. I heard even DeepMind is doing some stuff like this now. Do you agree with my analysis that we seem to be seeing more of it now? And if so, do you have any theory as to why it's having a moment?
Ed Finn: It's because we've been so successful at changing the discourse! No, I think that we have contributed to this broader movement. I think it's because... I think that there are a few reasons that this is happening. One is that the pace of change is accelerating and we are starting to recognize that we need better tools to anticipate and to discuss the future. And we need tools that we can deploy more quickly. And one of the other things about narratives, going back to my compressed file format, is that they're incredibly efficient. They're really cheap to come up with. You can actually construct a future like the AI 2027 future. You can do that with coffee and bagels. Or maybe, I think they did some quantitative analysis and research as well, but it's still much cheaper than building a giant computer model of a weather system or something like that. And it can be a really efficient way to explore part of the possibility space.
And so I think people are becoming more aware of the utility of that approach and also, I would argue, the necessity of that approach because we're at a moment in human history where our technical reach extends so far beyond our cultural, societal, ethical grasp that we don't have the words to describe the things that we can do. And science and science fiction are cousins in the sense that they are both engaged fundamentally in the practice of coming up with new words to talk about new things, new things that we could do, new things that we can observe about the universe. And this is part of why there's this vibrant feedback loop between science and science fiction. There's this sort of exchange of vocabulary and ideas that goes back and forth.
And so I think there's a recognition that we need to get better at telling stories about the future. And interestingly, science fiction has often avoided the near future because the near future is so easy to get proven wrong, right? And in some ways, it can feel a little bit boring. It's much more fun to imagine Star Trek five centuries from now because then you can have faster-than-light travel and all kinds of cool new technologies and you don't have to deal with contemporary politics in the same way that you would if you're imagining a story 30 years from now. But we need to get much better at telling stories about the next few decades of human existence because it's clear that we're at a series of really important junctures in terms of decisions we make about how we're going to govern technologies and how we're going to collectively respond to the problems that we've created for ourselves. So we need better stories that help us connect the decisions we're making in the present to those preferable futures.
So I think that's the larger... that's my take on the larger backdrop of why this is happening the way it is. I also think that this problem of proliferating specialization... I mean, obviously specialization is important and it's unlocked so much of the incredible progress and achievement that we've made as a species. So I'm not saying that we should all stop being specialists, but by focusing only on that and not encouraging people to also maintain some generalist perspectives and valuing that kind of generalist perspective through things like storytelling, we run the risk of losing the forest for the trees, right? Or creating these technologies and completely missing their consequences or their unintended consequences because the people designing them just really didn't stop to think about what was likely to happen. And so just a little bit of practice, a little bit of this kind of, you could think of it almost as futures inoculation, can be really powerful because it gives people the seeds and the basic cognitive tools to start asking, "Okay, well, we can do this, should we do this?"
Beatrice Erkers: Yeah, that's really interesting. I agree; I think it is the pace of change, that it's happening so fast. My main fear was also AI, with recent AI developments happening so fast. Even near-term futures are hard to predict, and so it just feels very relevant to think about this. Also, on your second point, that's a sense that I've gotten lately too: that transdisciplinarity is having a bit of a moment as well. It is, at least, being valued a bit more highly than maybe it has been for the last few decades. I'm basing that mostly on... yeah, I was at the ARIA summit here in the UK the other day, and that's really a core point of their program, and they're funding a lot of things. I think also, with current AI developments, generalists have even more agency in a lot of ways now, because we can get more done with all these tools. And I was listening to a podcast with Tyler Cowen and Jack Clark from Anthropic where they were discussing whether maybe the humanities are going to be really up-valued now. Do you have any thoughts on that as well?
Ed Finn: Yeah, I've been thinking about that too. And I think that's true. I haven't listened to them make that argument. So one thing that I find very striking is we spent decades trying to digitize and binarize the universe into ones and zeros. And we thought this was going to be the way that we achieved breakthroughs in artificial intelligence and sort of higher-level reasoning and learning with machines. And I find it ironic and hilarious that all of these breakthroughs have been predicated on language, on large language models. And I'm waving my hands a little bit here, and hey, my PhD is in English, so I'm sure some of the audience can critique me on this. Yes, of course, it's still ones and zeros, but we've changed the framing from thinking of this as binary data to saying, "We need to think of this as language." We're going to think about X-rays as language. We're going to think about, I don't know, any dataset you want, neutrinos or something like that, or let's say messages from the stars, radio frequencies from radio telescopes. Thinking about all of these using language models is actually unlocking all of these things that we thought we were going to achieve by turning it all into binary data instead.
So there's this baseline level at which that's really interesting to say, "We're achieving these breakthroughs in creative expression, in language, in conversation." But we're also arriving at this moment where the critical faculties that the humanities and, you could call it, the classical liberal arts education try to teach people are really important. And those are not just about the language and expression; it's about critical thinking, it's about close reading and the idea of understanding the interpretive frame in which you're working. It's about reflecting on the human experience and centering the human, which seems, I don't know, maybe trite or simplistic, but it's actually really profound. It's the one... it's the major gap, right? The idea of experience. Experience is this huge word. What does it mean to have experience, to experience things as a human?
And the version of human cultural outputs that we get from a tool like ChatGPT or Claude or whatever is like... it's like it's taken all of human culture, chopped it up into these tiny little bits, freeze-dried it, and sort of blended it together into this smoothie that looks sort of like the original thing. It's all people in there, as Jaron Lanier likes to say, but it's not any one person. There's no person on the other side of the conversation. And our brains are so cued up to look for that person. So the humanities and liberal arts perspectives are really helpful in giving us the cognitive tools to imagine what is happening with the words that are our medium of interaction with so many of these tools, to imagine what is going on on the other side of the screen. Because the most dangerous fallacy we live with right now is that there's something like a human, something like a person, on the other side of the screen, which is just not true, at least right now.
And maybe most importantly, it helps us imagine ourselves in a richer and a deeper way. This is the fundamental question that the humanities seek to answer: What is it to be a human? What is the human experience? What are the boundaries of our identities, ourselves? What is the meaning of life? These are all questions that you cannot answer with ChatGPT. You can ask ChatGPT and it will give you answers, but imagining ourselves, having a vibrant self-imagination, is a hugely overlooked prerequisite, I think, to becoming effective collaborators with these machines. Because if you don't have that, then it's just this long slippery slope towards being the button pusher, right? You're just pushing the button and having the machine give you the answers. I was just having a conversation about this with a former student yesterday, thinking about the Socratic method: using these tools to do the things they are good at, which is to remember and to consistently express values. Humans are really good at coming up with values. We're bad at consistently maintaining them. Machines are really good at being consistent. And so if we could design our tools to be better at reminding us to be better people, and to ask us questions instead of giving us answers, I think we'd be better off.
Beatrice Erkers: Yeah, I think if we consider the best-case scenario of what AI could help us achieve now, it's helping us solve a lot of our easy-to-define problems, like health or finance or just these sort of physical basics, and then we'll be left with all the gray-zone philosophical stuff that we need to figure out as well, I think.
So one thing that I'd also love to just get to talk a bit with you on is that you did this work on, it's like a re-publishing of the Frankenstein novel, I believe, but with annotations. And I thought that just seemed like such an interesting project. Could you just explain a bit what that was? Because I think there's a lot of... the Frankenstein story is just such a good story to come back to when you work in the field of thinking about technology at all. And so I know that this was a really big part of how you were approaching it as well.
Ed Finn: Absolutely, and I could go on about Frankenstein forever, so I'm going to try to be brief. First, it's worth asking how this story that's only 200 years old has so widely permeated our collective cultural consciousness. At least in the West and the global North, Frankenstein is so omnipresent that you don't even need to use the whole word. You can just use "Franken" and put it in front of something (Frankenfood, Frankenscience) and you immediately create this cassette of questions and a cultural context. And even if you've never read Mary Shelley's novel, you know this story because you've seen it in a hundred movies and cartoons and TV shows. You've seen it on a lunchbox. You've eaten the breakfast cereal. You've dressed as the monster for Halloween. It's everywhere. And so it rivals much older cultural myths like Cinderella, stories that are thousands of years old, that have the same level of broad cultural presence.
So why does Frankenstein become so successful? And I think it's because it emerges just at the cusp of the modern scientific revolution in the early 1800s. And Mary Shelley is observing early scientific experiments in galvanism and electricity. She is reading philosophers, political philosophers, natural philosophers. She's engaging with this intellectual elite of the era. Her biography is incredible. It's amazing that this woman, this young teenage... this teenage woman in England was able to write and publish this book and that it became this huge international phenomenon that she made almost no money from. Her life is actually really tragic and amazing. So if you don't know anything about Mary Shelley, go learn about her.
But our approach with this book, given this background, was to say, "Well, this isn't a history exercise. This isn't an exercise in what happened 200 years ago. What can Frankenstein teach us about the next 200 years?" Because the fundamental questions that Mary Shelley was asking were science fiction in her time, and many people argue that this was actually the first science fiction novel. The character, Victor Frankenstein, predates the first use of the word "scientist" by almost 20 years. So before we had the word for scientist, we had this archetype of the mad scientist, the flawed creator, right? So just think about that. It's kind of amazing.
So the things that were totally speculative then are now happening every year. There are high school students creating artificial life, genetically engineered organisms. We're talking about AI. We're talking about robotics, all of these different arenas in which we can now do the things that Victor Frankenstein was speculatively doing in the story. And we approach this not as a cautionary tale. One reading of Frankenstein is just to say, "This is what happens if you mess with these things that humans shouldn't mess with. If you play God, it's going to go badly for you." But our take on the story is that it's actually about scientific creativity and responsibility. Victor Frankenstein is a deeply flawed character, but his big mistake is not so much creating this creature as turning his back on the creature, failing to take responsibility, and failing to be a good parent. Right. And this is like Alan Turing talking about how if we're going to create intelligent machines, we need to be like parents to them.
And so that kind of responsibility-taking is something that happens at every creative act. And creativity for Mary Shelley was about biological creativity and child-rearing. It was about intellectual creativity and birthing this book. It was about the idea of making something in the world. And we're at this moment now where we are also creating these tools that are themselves creative, right? Or we're wrestling with how true that might or might not be. So the project for us is about what we can learn from this story as we are wrestling with these fundamental questions about agency, about responsibility. And a lot of that ultimately comes down to this question of society. What do we share? And one of the reasons that Victor Frankenstein goes off the rails is that he rejects society. He doesn't seek advice or counsel. He just goes off and does this thing on his own. And then the creature also gets rejected by society, and the creature doesn't start off evil; it becomes a psychopathic killer because of these repeated rejections and cruel treatment by humans. And so the story... some of the key lessons we need to reflect on now are how do we create shared narratives about scientific creativity and responsibility that center hope, but also center that sense of duty and obligation to ourselves, to the things we're creating, and to one another.
Beatrice Erkers: Yeah, it's an eternal dilemma, and it feels so present right now, again in terms of AI, which is what I'm thinking about a lot, but also if we think about potentially having sentience in AI systems and so on, and really in any science and technology that we're advancing. I remember talking to Christine Peterson, the co-founder of the Foresight Institute, about this. It surprised me, actually: when you look back in our archives, even though at Foresight it's always been like, "We're super excited about technology and we really think the future is better as a high-tech future," there has also always been discussion of the risks and so on. And something Christine said stuck with me: "We always also try to instill in the scientists and the technologists that we work with that you are responsible for your creation." And then I guess it's hard to say exactly what that means, because does that mean you're legally liable for everything that happens with something you created? But yeah, it's such an interesting question. I highly recommend this book. It's called Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds.
Ed Finn: Thank you. And that is... you can buy the book from MIT Press. It's also, the whole book is available online at frankenbook.org with some short videos and other media and even more annotations online. And we're also starting a new book series with MIT Press where we're going to annotate other works of science fiction. So that's called Imagination Annotated. And the first volume will be Jules Verne's From the Earth to the Moon. It's not out yet. It's not going to be out for a while, I think, but keep your eyes out for that. We're very excited about starting this new series.
Beatrice Erkers: That sounds super interesting. I will definitely keep an eye on that, and we'll try to share a link with this episode. On that note, it would be interesting to hear your thoughts on formats. We've mostly discussed books today, you and I, but if we only publish books, we limit who reads these things or who gets access to this information. Are there any other formats for this type of storytelling that you're excited about?
Ed Finn: Yes, and I completely agree. I think that the most popular and widely traveled visions of the future come through film and visual media. And when you think about AI, what are the stories that people talk about? They talk about Terminator. They talk about The Matrix. They talk about Ex Machina, things like that. And sometimes Frankenstein, and R.U.R., things like that. I think that we need to get much better at telling hopeful stories in film. I have a collaborator, and we've been talking about the incredibly pervasive dystopia in video games as well. If you think about the billions and billions of hours people pour into games, spending time in these futures that are really dystopian, what would it mean to create more positive narratives in that format?
So I think this is really important. The commercial headwinds against hopeful futures and hopeful storytelling in these visual entertainment media are pretty significant. So I think it's hard, but I think it's really important work. It's something that we're actively engaged in, but we're just at the beginning of it. We did create this really great short film, a sort of book trailer but also a narrative, a little story. It's a six-minute film called "The Assignment" for our climate futures book, the Climate Action Almanac, which is at climatealmanac.org. And that's also coming out with MIT Press later this year as a book, a new edition called Climate Imagination. But I'll send you the link for that trailer. I think it's really lovely and it captures some of the central ideas of the premise, which you'll recognize as the same thing I've been talking about this whole time, in a story about college students and their climate anxiety and how you change the narrative and change our affective relationship with this problem. So I think it's hugely important. I want to be doing more with it. We could have a whole longer conversation about why so many of the stories that we tell in movies and TV shows are pretty cynical about the future. I think we could do better.
Beatrice Erkers: Yeah, I agree too, definitely. And I think there are some examples, but none that have hit it big in terms of positive futures. So it still feels like it's quite niche. So a few more questions just before we run off. One thing that I wanted to ask you also, I think I heard you say in another podcast that, you know, it's not that we should be only consuming utopias, for example. But maybe there's a bit of a diet that we should consider here. Like maybe dystopias are good, you know, like a bit like Frankenstein, maybe it's actually useful to be warned about some things, but then maybe we also need the utopias or at least something more positive than what we have right now. Do you have any sort of thoughts on what would be an ideal diet of fiction like this to consume?
Ed Finn: Yeah, and I want to put in a plug. I think that your audience would really like Future Tense Fiction, where we publish original speculative fiction each month, along with a nonfiction response essay by some kind of technical or scientific expert, in Issues in Science and Technology, which is the house magazine of the National Academies of Sciences, Engineering, and Medicine. The reason I mention that is that when you read those stories, you're not going to think they're all utopian. And one of the truisms about utopias is that my utopia might be your dystopia. These things connect together: as you go to the extremes of utopia and dystopia, it's very easy to fall from one into the other. And it's actually quite difficult to try to design a perfect society that everybody's going to be happy in. Even just saying the words out loud makes me feel kind of itchy, right? Like, who decided? And am I going to like it? And is there any possibility of change?
So instead, I think this idea of protopias and futures where things are getting better is really useful. We have... dystopias are really helpful too. Books like 1984, Brave New World, The Handmaid's Tale are incredibly powerful and they're incredibly important because they are yardsticks that let us clearly measure whether we're inching closer to the futures that we don't want. But we have a lot of yardsticks that measure the futures that we don't want. And we don't have very many yardsticks that measure futures we're trying to get to that are concrete, tangible, plausible, preferable, that we say, "Yeah, that would be really great. Can we get closer to that?" So that's why more hopeful stories about the future are really important. But yeah, I think you need to have a balance and a diet because we're not all going to agree on what a better future is. And that's okay. That's actually good. What we're... if we're not very good at imagining hopeful futures, we're terrible at having conversations about, "Can we come to agreement about what those better futures are?" And so the first step is getting a lot of those ideas on the table and exploring a broader possibility space. Just the exercise of realizing, "You know what? This isn't path dependency. It's not that there's only one future or that a tiny fraction of humanity is going to make all the decisions about what our future is like." That actually there are lots of possibilities and we all have agency to shape what future we get. That fundamental learning from just stepping into a bunch of different possible worlds is really important. That's the first step. And then you can start to have that second-order conversation of, "Well, what did you like about this one? And maybe we could combine pieces of these two futures," or "I don't know that I totally agree with this, but I am going to change something that I'm doing today because I'm inspired by this."
Beatrice Erkers: Maybe this kind of is your answer to the next question, but the next question is like, you know, you're someone who I think has consumed a lot of sci-fi, is my suspicion. What is a vision of the future that makes you really excited? Do you have your favorite existential hope vision?
Ed Finn: That's a good question. I don't have one, just one, that I would say, "This is the future that I want to live in." As a generalist, I'm also... I don't know if chameleon is the right word, but I'm a traveler, right? I like to explore different possibilities. So I guess I like visions of the future where that is possible. So I do... Star Trek is still a yardstick to say, "This is a future where things are getting better." I'm drawn to visions of the future where people have agency and they get to participate. And, in a funny way, I'm also drawn to futures where it's not like we've erased the present. This is one of the... the future is not a changed and unrecognizable place. The future is just a little layer on top of all the geological layers of time and human experience and history that have happened already. So New York in the future is just going to be a New York that's built on top of the New York that exists now, right? So in that regard, some of the futures that I find really inspiring are Kim Stanley Robinson's visions of the future. So, New York 2140 is one example of that, or 2312 imagines this thriving solar system civilization. And in those worlds, things are not perfect. There are lots of problems, but there's this idea that things can keep getting better and history continues to exist. I think that's one of the signs that you're living in a dystopia, is when history has been erased. So those are some of the futures that I find really inspiring.
Beatrice Erkers: I think the last question will be, do you have any recommendations? And this could be books, but it could also be movies, TV shows, just anything that you find inspiring and something that ignites hope in you that you want to share with our audience.
Ed Finn: Well, I've already mentioned some of the stuff that we're doing, so I don't need to mention it again. And I only mention it not to toot our own horn; that's the work, right? Trying to ignite hope for other people. I do find writers like Stan Robinson really inspiring. I like the work of Annalee Newitz. Their futures are really playful, but also sometimes kind of dark (they're not utopian futures), but they imagine people thriving and finding ways to thrive even in sometimes challenging environments. And I think looking at stories where different configurations of the world exist... so I've mentioned Le Guin a few times. I think one spiritual successor, in a really playful way, is Becky Chambers. I love her robot and monk stories, some of her recent work.
I'd say though that more important than any particular narrative is reflecting on these sort of genres of experience and the genres of storytelling, and why they speak to you as an individual. Why is it that I like spending time in this world? So I would also say stories like The Expanse are really fun. Again, not a perfect future, but I love stories... I love futures in which politics exist and people are having impassioned debates about the futures that they want in that future. So those are a few examples.
Beatrice Erkers: That's true, politics is probably going to continue to exist. So that's a good note. Yeah, thank you so much. I think that's also a good note to end on. Thank you so much, Ed, for joining us. It was...
Ed Finn: Thank you for having me, this was really fun.