Podcasts

Anastasia Gamick | Raising science ambition: how to identify the highest-impact research for an AI world

About the episode

Most scientists do “safe” research to secure their next grant. But what if more of them worked on the most important problems instead?

In this episode, we talk with Anastasia Gamick, co-founder of Convergent Research, about how to raise our level of ambition for what science can actually achieve.

Convergent Research incubates Focused Research Organizations: small, startup-style teams that build critical “public good” technologies that both academia and for-profit companies neglect.

We discuss:

  • What makes a research project truly high-impact in an AI-driven world
  • Concrete examples of these projects: maps of brain synapses, software that’s provably safe, drug screening, good data for AI-powered scientific research, and more
  • How to prioritize defensive technology, such as biosafety tools, instead of just pushing every frontier as fast as possible
  • How young scientists can find the work that matters most for the future

Transcript

[00:01:52] Beatrice: Today, I'm really excited to have Anastasia Gamick. I hope I'm pronouncing your... Good. You are the president and co-founder of Convergent Research.

[00:02:02] Anastasia: Yes, that's right.

[00:02:03] Beatrice: Yeah. And the reason I wanted to have you on—we've had Adam, your co-founder, on to talk about all the really great and exciting work that you're doing. But I heard you give a talk recently at the Progress Conference, at the same venue a few months ago, where you talked about how you had actually worked to prioritize the capabilities that you're working on at Convergent Research. And I thought that could be a really interesting topic to dive into.

I also think we should do a little recap of what it is that you're doing at Convergent, of course, for those that haven't heard the Adam episode. But yeah, why don't we just start there? Who are you, and what is Convergent Research?

[00:02:44] Anastasia: Yeah. I am Anastasia Gamick, as you said. My background is almost entirely in startups. I was really early at Neuralink, Elon’s brain-computer interface company. I've worked in robotics. I worked across last-mile payment delivery in Sub-Saharan Africa, helping big NGOs send funds directly to beneficiaries. And Adam and I started Convergent; actually, I was largely inspired by Ilan Gur, who's here at this Foresight event also.

So, how do you create the mechanisms to help leading scientists take their ideas out of the lab into the real world? And Adam proposed this idea called a Focused Research Organization (FRO). So, this is a startup-inspired R&D project. Think: something like a critical mass of people over a critical mass of time working and acting like a startup. They have a CEO, they have a five-year roadmap, they do quarterly planning, they have OKRs, and they have a multidisciplinary team. They have software engineers, and they have virologists and automation engineers, all working together towards a very specific technical goal.

We look for projects that can't be done in academia, generally because they're a bad fit—because they have a bad dollar-to-paper ratio. It's a lot of engineering work. It's not something where you're going to generate enough publications to justify your next grant cycle or your next grad student. And it's a bad fit for for-profit settings because, as a nonprofit, it's a public good; it's hard to capture that value. So, we call these Focused Research Organizations, and we've launched 10 of them over the last four years.

[00:04:27] Beatrice: Do you want to tell us a bit, like, which are the 10 organizations that you've actually launched? This is a quiz, isn't it? Yeah.

[00:04:34] Anastasia: So, the first two were E11 Bio and Cultivarium. E11 Bio will be very familiar, I think, to the Foresight audience. They're doing high-throughput connectomics, so they're actually building synaptic-level maps of the brain—what neurons talk to what neurons. I guess, even more specifically, though, they're building the tools that let us do that cheaply. They're working on fundamental capabilities—which we'll probably get back to—around brain mapping.

And then we have one called Cultivarium working on non-model organisms. So, how do you take the biodiversity that exists in the world and use all of the different microbes to their fullest potential? Right now, if you find a cool microbe in your backyard—because there's definitely a cool microbe in your backyard—and it produces a protein you like, actually, there's no real way to use that in an industrial lab setting. And so, they’re building the tooling that you need to make those different microbes tractable.

And then we have other projects that span neuromodulation, or building a big telescope that should be able to see the dark matter that's strung between galaxies, or working with single-cell proteomics. These are all projects that we think fundamentally change the way that their fields will work.

[00:05:50] Beatrice: Yeah. And I think you mentioned this: they are time-bound, normally five years. So what happens then? When did E11 start, for example, and is it closing in on its five years?

[00:06:03] Anastasia: Yeah. E11 has a couple of years left; they're about three years old. They just entered their Phase Two. Yeah, so at the end of these projects—I guess, first on the time-bounded nature—I think that oftentimes open-ended research is this really beautiful and wonderful fundamental part of our R&D ecosystem, but it's very exploratory and you're always coming back for more funding. I think that oftentimes when you talk to philanthropists, one of the things they're concerned about is that if they fund an organization, someone has to fund that organization again next year, and someone has to fund it again the next year.

And so, by having them be time-bound, we've limited the ask on philanthropists. We're not asking you to support an organization forever; we're asking you to support this five- or six-year project. Yeah. And at the end of those projects, many things can happen, and it'll depend really deeply on what that project is. Some may spin off into a for-profit startup; you may be able to capture value someplace in the chain and you can take that piece and create a biotech out of it.

Some may be donated to another nonprofit. There are a bunch of nonprofits out there in the world that kind of operate continuously, and they could take on this. Some could go back to academia. Some could be open-source datasets that are just used. Others could become, like, Contract Research Organizations (CROs). So, I think that each one of the FROs will have a different path forward as they figure out what is the biggest impact they could have with their work.

[00:07:38] Beatrice: And could it be that they deem the best path is to continue as an FRO, or something like that?

[00:07:45] Anastasia: I think that rather than continue as an FRO, there may be, like, another FRO that follows it. So, I think that it's actually really important that they be time-bound. I think that we would be unhappy with the outcome of, "Let's just tack another two years on. Let's tack another two years on."

Because the point is that there's, like, a very clear product or output at the end of the FRO that we want to exist. E11 is going to make this brain-mapping technology a hundred times cheaper, a thousand times cheaper. And when they do, that's the end of that FRO. They could then say, "Actually, now what I want is to map a whole mouse brain. Now what I want is to do the Human Connectome Project." And you could pick a different project and use the same team towards that other goal. But I wouldn't want to just append a year because I think that it becomes much more like a traditional nonprofit where you're out there every year fundraising for your budget the following year.

[00:08:40] Beatrice: Yeah.

[00:08:40] Anastasia: Where the beauty about FROs is that you're really building, from the get-go, a very clear product. It means you can go to philanthropists, or you can go to the government, and you can say, "This is what needs to exist in the world. This is why. And it's going to take us $30 to $50 million and five to six years to build that specific thing."

[00:09:03] Beatrice: Yeah. I do love the time-bound element. I think it's quite unique, or at least from what I know. And yeah, it avoids this problem of everything needing to be its own, like, forever project.

[00:09:18] Anastasia: And I think that one of the reasons that startups are so successful is that there's a real existential dread—like, you're afraid the whole time that you're not going to raise your next round. I don't think that exists in the same way in industry labs, or it doesn't exist in the same way in academia. So, having this notion that "I have to hit this goal in the next year to raise my next round," I think, is really important and a thing that we're trying to mimic.

[00:09:46] Beatrice: Yeah. So, maybe we dive into then how you came to choose the ones that you choose. And yeah—so, again, to cover the framing a bit for this episode: at your talk at the Progress Conference, you were talking about how to choose capabilities to prepare for the Intelligence Age. And yeah—maybe do you want to just start with saying what you mean by the Intelligence Age?

[00:10:15] Anastasia: Yeah. Adam and I sat down in January of this year, and we thought about the year 2075. And I think we expect a lot to change between now and then. I think that we've seen real progress in AI and real new capabilities. And so, if that goes as well as it possibly can, what does that world look like?

What does it look like when we have leveraged this really amazing new technology for all of its benefits and avoided all of the downsides? And so, we thought about things like: we've avoided big risks. We've avoided AI letting people design bioweapons that cause pandemics. We've avoided a world in which everybody is just using it to hack each other's critical infrastructure, and no one has power and no one has fresh water. We've avoided a world where democracy has collapsed and civil discourse has collapsed. And instead, we have a world where maybe even civil discourse is more humane and more open and more collaborative than it is today.

There's a world where AI and LLMs are building, sort of, confidence and comfort in being human—maybe they have encouraged more interpersonal connection. There's no material scarcity anymore; we're in a post-scarcity world. We can solve diseases; we can print medicine. We have a Star Trek-style replicator. And clean, abundant energy—there are a couple of others we thought through. Yeah. So we're like, "Wait, so that's the world we want. That's great."

I can't really just make that world. What are the, like, fundamental capabilities? What are the things we need to build today to get there? So, we spend a lot of time thinking about that. What are the seeds of those technologies? If you think about a post-scarcity world, there's positional chemistry, there's nanotech. Can you actually solve many of these critical pieces that you need to build the systems of the future? And—

[00:12:58] Beatrice: But just, I love that practice. And it actually shows—it's something that we've done a lot at the Existential Hope program. We—I mean, we call it world-building. You can call it scenario planning, whatever you want. But I love that you've done that and really put it into action: that's the point, that's why we're doing it, so that we can backcast and think about what should we be doing now in order to make it happen. Yeah, sorry.

[00:13:25] Anastasia: No. And it's—I think that some of this is built on... I think that one of our core things that we think about is this notion of a "fundamental capability."

So, you can think of the world being full of things that we interact with, whether that be computers or medicine, mRNA vaccines, LIDAR—there are things that we use in the world. They're actually based on more basic research. They're based on more basic discoveries or tools. And we want to back out into: what are these enabling technologies? What causes all of these downstream outcomes?

E11 Bio is an example of this where, you know, if you think about understanding how consciousness arises, if you think about neurodegenerative or neuropsychiatric diseases, if you think about neural nets, there is a common thing that you would want to know across all of these different areas, which is the synaptic wiring of a brain. But to get the synaptic wiring of a brain, you need a mapping technology. And a big piece of that mapping technology is called proofreading, or error correcting, which is an incredibly expensive process. That is how we got to the notion of E11 Bio—we brought several technologists together and said, "If they can build this fundamental capability, it'll enable all of these downstream outcomes." And so, we've looked across that in many different fields and have fellows working across a handful of different fields right now, too.

[00:15:11] Beatrice: Which are—I mean, you don't need to go through all of them, but what are some of the other fundamental capabilities that I'm guessing then you have other Focused Research Organizations maybe working on, or maybe who will start working on it?

[00:15:24] Anastasia: Yeah, we've been spending a lot of time thinking about good epistemics.

[00:15:37] Anastasia: I think that as a society, there are a bunch of tools that have caused us to potentially have worse epistemics, worse conversations. And so, are there software tools, AI tools that you could build for deliberative democracy, for better conversations? I think that if you look at something like Community Notes on X—or Twitter, now I'm blanking on what it's called.

[00:16:04] Beatrice: Community Notes.

[00:16:05] Anastasia: Yeah, Community Notes. This is actually a really crucial invention.

[00:16:10] Beatrice: Yeah.

[00:16:10] Anastasia: Are there other inventions like that that we could find that would improve the way the discourse is had? So, that is an area that we're spending a lot of time on. We're still very much in the road-mapping stage there. There's a potential project we're really excited about in deliberative democracy based on some work from scientists from Google and from a couple of other organizations.

[00:16:33] Beatrice: That's so exciting. I know that's such a—that's definitely a fundamental capability. I also heard that ARIA were thinking of something similar, and it's exciting that you guys are thinking about that because I think that's definitely something that comes to mind for the layman. Yeah, or something that is a bit challenging right now.

[00:16:51] Anastasia: I think this would be a big part of how our world changes over the next several decades. Nicole Wheeler at ARIA actually came to our workshop that we had on this. And so, a lot of what we do is convene groups. We actually threw a workshop in September—sort of the strangest workshop anyone's ever thrown—where we had people from positional chemistry, from biosecurity, from this deliberative democracy team, from provably secure software. And we thought about: what is this world that we're going to? We had people from different policy orgs and different government orgs also there as we're thinking about this world that we're heading towards.

[00:17:29] Beatrice: That's really cool. Yeah.

[00:17:30] Anastasia: It was so much fun. I learned a ton.

[00:17:33] Beatrice: And this was for the epistemic point only, or—

[00:17:36] Anastasia: No. So, it was cross-disciplinary for everything, and then epistemics is one of the groups that was there. And I think that one of the things that Foresight does, and one of the things that ARIA does that I really admire, is cross-disciplinary conversations. I actually think that they are really inspirational for people and really help people figure out how their projects should interact with the world.

So, we had leaders from the areas that we are most interested in all together. Another area that we're thinking about is provably safe software or formally verified software. One of our projects is called Lean, which is an open-source software language for formal math. It allows you to do the type of math that involves proofs—higher math—on a computer, and then would allow a computer to automate those proofs. And so, we're thinking about what are extensions of this? And one would be: could you build software that you knew did what you intended it to do?

So, one of the things that's hard is that you can right now prove software against a spec. But how do you know if your spec is actually what you wanted it to accomplish? How do you know that your spec is written correctly? So, we're working on a project called Oath right now that would be designing that system.
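
(To make "proving software against a spec" concrete: in Lean, a spec can be written as a theorem about a program, and the file only compiles once the proof goes through. Here is a minimal, hypothetical sketch, illustrative only; this is not code from the Lean FRO or from Oath.)

```lean
-- Toy spec: absolute value never returns a negative number.
-- Hypothetical example; the names here are not from any FRO codebase.
def myAbs (n : Int) : Int :=
  if n < 0 then -n else n

-- The spec, stated as a theorem. Lean checks the proof mechanically;
-- if myAbs violated the spec, this file would not compile.
theorem myAbs_nonneg (n : Int) : 0 ≤ myAbs n := by
  unfold myAbs
  split <;> omega  -- case-split on the `if`, then linear integer arithmetic
```

(The open problem Anastasia describes sits one level up: even when the proof succeeds, nothing checks that the theorem you wrote down is the property you actually wanted.)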

[00:19:08] Beatrice: And do you have any examples of maybe a project that you decided not to pursue? Like, just to understand the decision-making process a bit—is there something that you thought, "Maybe we were considering this as a Focused Research Organization but decided not to because..."? Yeah, I don't know.

[00:19:29] Anastasia: I think I could probably give two or three separate cases where this is true.

[00:19:33] Beatrice: Yeah.

[00:19:34] Anastasia: So one: we saw a pretty cool biology project, but it was what I thought of as a biosecurity risk. And so, it was like there was high upside—I think this project would've been really beneficial in many ways—but I think it had a pretty significant biosecurity risk. And I think that we try very hard to build defense-dominant technologies—things that don't have dual-use potential, things that can't cause serious harm. That is an example of a type of project we don't do.

Another project that we don't do is people have brought us projects and we're like, "That's a startup. You should go raise for-profit funding for that." If you can raise for-profit funding for this, if you have users, if you have customers, you should do that instead.

And then we get a lot of... okay, there are two other ones. We get a lot of projects that come to us in the form of, "I would like more money for my lab." And I love academic research, and I think that curiosity-based, investigator-based research is really important, but it's not what we do. I think that rather than just being "my lab is focused on this area," actually, the goal of a Focused Research Organization is a really clear product, a clear dataset, a tool, a platform that exists on the other side of it—not just more research in a single area. And so, I think that's like a big difference there.

And then the last one is that I think we're going to see a lot of advances from AI over the next couple of decades, so we're trying to pick projects that won't easily be made obsolete. I think that there are interesting things to do in software right now, like software for science, and it's just unclear to me that AI won't be able to do them in five years. So, we're looking for projects that aren't going to be easily done by AI, and that will potentially enable AI or enable AI-based science. I think building the right dataset is a really big one here.

[00:21:54] Beatrice: Yeah. Do you have any ideas or projects that are a little bit the opposite, where you think, "Actually, we don't think it's worth starting because we expect AI to help us be able to solve it much faster in five years"? Maybe there's nothing like that.

[00:22:14] Anastasia: Yeah. No, I think that there are. I think that actually, I would think of it even more as: if they're starting, we should start it as, "How do we leverage AI there?" I think that there's a project out there called "The Great Refactor." Have you heard of this?

[00:22:32] Beatrice: No.

[00:22:33] Anastasia: Oh gosh, I'm going to speak out of turn now because I am not a scientist—specifically, really not a software person. But there is a bunch of code in the world that is written in C, which is a memory-insecure language.

[00:22:50] Beatrice: Okay.

[00:22:50] Anastasia: It's not very safe. A lot of our infrastructure is built in this. You could move it into a more secure language. So, it's like a big code refactor for basically everything we do. And so, it's like: how do you build the tooling to make that happen, and safely? And I think that figuring out how you leverage AI for projects like that is really important.
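
(A minimal, hypothetical sketch of the bug class at stake, written in Rust, one of the memory-safe languages such refactors typically target. In C, writing one element past the end of an array compiles and can silently corrupt adjacent memory; the equivalent access in Rust is refused at runtime or forced into explicit error handling.)

```rust
// Hypothetical illustration of the gap a C-to-memory-safe refactor closes.
// In C, `buffer[8] = 1;` on an 8-byte array compiles and can quietly corrupt
// whatever sits next to the buffer. The same access in Rust is refused.
fn main() {
    let buffer = [0u8; 8];
    let index = 8; // one past the end: the classic C off-by-one

    // Direct indexing (`buffer[index]`) would panic with a bounds check
    // instead of corrupting memory; `get` makes the failure a value to handle.
    match buffer.get(index) {
        Some(byte) => println!("read byte {byte}"),
        None => println!("refused out-of-bounds read at index {index}"),
    }
}
```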

Another big one is that, right now, all climate models are done basically in Fortran on super-high-capacity supercomputers, right? They're done in the same way that they have been done for decades. They don't run on GPUs; they don't run in Python. And so, how do you refactor all of these models to instead be able to run on the increased capacity from GPUs and in better software? I think that across science, we have this problem where grad students—we all love grad students—but grad students have done a lot of the coding, and then they have left. And so, the software that powers a lot of our best science is not maybe the best quality software you would expect from a software company.

[00:24:10] Beatrice: Yeah, that's an interesting point: how do you leverage AI in your own projects? Is that something you think about across all of the projects these days?

[00:24:25] Anastasia: Yeah, I think it does. I think that there's an extent to which any of our existing projects were already in this framing. E11 Bio, I think—if you are thinking about uploading, if you are thinking about creating a digital twin of a brain—all of these things are dependent on the discoveries that E11 Bio makes. Like, you will need AI to do any of that, but you then need the fundamental truth from E11 Bio also. And so I think a lot of our projects are of this sense, where the truth that we are discovering could be paired well with AI to get to the outcomes we want in the world.

[00:25:12] Beatrice: Yeah. Do you remember from your sort of world-building workshop that you did? We touched on the epistemic point. Were there any other—we touched on the coding—any other sort of fundamental capabilities to specifically prepare us for the Intelligence Age that came up?

[00:25:35] Anastasia: Yeah, so there's this category we called "biological and ecological dark matter." I think it is a fair expectation that AI could rapidly advance science, but those systems—whatever they are—still have to see and interact with the world.

[00:26:08] Beatrice: Yeah.

[00:26:09] Anastasia: And so right now, we have a very limited understanding of how even human biology works. We don't understand how the very complex immune system works. We don't understand how your metabolism works. We don't understand all of the proteins that exist in your body. So expecting an AI agent or an AI system to go and suddenly "solve cancer" is really difficult. A lot of what we're thinking about is: how do you build the tools and the datasets that a super-intelligent AI system would need to do that research? How do you create the visibility there? Is it better microscopes? Better sensing? Bigger datasets that you can train it on or use as reinforcement learning containers for—

[00:27:05] Beatrice: Are there... that's a really interesting question. I haven't heard about that. What do you mean by biological dark matter?

[00:27:11] Anastasia: Yeah. "Biological dark matter" is what we use internally. Biological or ecological dark matter.

[00:27:15] Beatrice: Yeah.

[00:27:15] Anastasia: I guess this is maybe where I and my friends who work at the frontier labs disagree the most. They're like, "Eh, it's just going to do it." And I was like, "We literally can't see how anything works right now." We do not have a synaptic-level map of the brain. We currently have very little idea of what glial cells do in the brain, right? We don't know how all of the ion-gated pathways work between neurons.

And so, if we can't see that and we can't show it to an AI system, how that AI system is supposed to learn and work with it is really unclear. And I think that one of the problems with biology is that there are just, like, infinite things that one could look at, right? It's just an insane combinatorial problem. A lot of what we're doing is trying to figure out: what are the right datasets? What are, actually, the tools and the capabilities that would make us capable of doing this kind of research?

[00:28:22] Beatrice: Yeah. Thank you so much. I really do appreciate you laying the groundwork for all the rest of us.

[00:28:28] Anastasia: It's fun. And I think that one of the things that's interesting about scientists is that they're not incentivized to ask themselves this question. Most scientists are incentivized to ask themselves: "How do I get my next grant? How do I hire a grad student? How do I get my next position? How do I get a postdoc?" Most companies are incentivized to say: "How do I make money? Where is the therapeutic on the other side of this?" You can look at companies like Laura Deming's company, Until, or you could look at things like Max Hodak's company, Science Corp, and they're like really far-reaching, really hard companies.

And even there, I can see how users will interact with this one day. But no one's going to interact with making single-cell proteomics a hundred times cheaper. There's like a handful of scientists who will; no end-users will. And so companies aren't incentivized, either, to think about these fundamental capabilities.

[00:29:32] Beatrice: Yeah. It is very much appreciated and very needed. I love it. And I love the big vision, basically. Do you think about that... when you create a new FRO, would it ever be like—are they all kind of moonshots, or are you pretty certain you're going to achieve what you set out within the five-year range?

[00:29:58] Anastasia: There's often a world... we don't always say this publicly, but sometimes they're very "boring" projects. So we have this high-throughput drug screening project. (I am not calling you guys boring, I promise!) They're doing an amazing job. They're taking a bunch of human druggable targets, so GPCRs, gene targets; there's a wide variety of ways in which drugs can interact with the human body. And they're screening things against those targets: all of the FDA-approved drugs, or what if they took a bunch of Chinese peptides? Or microplastics? Or things from the environment? And then they open-source that dataset.

In some ways, this is just really basic work, right? You develop an assay; you run things through that assay. In high-throughput screening, you develop another assay; you run things through that assay. There's some scientific risk, but there isn't a huge amount of scientific risk. There's some engineering risk, but there isn't a huge amount of engineering risk. But I know we can get to the end of it, and I think that what it'll show will be incredibly impactful. It'll change the way we think about drug discovery, or change the way that we think about how we interact with all of the new materials that we're putting into the world. And so, in some ways, the project itself doesn't feel as moonshotty, but the impacts of the project are definitely moonshot-scale.

[00:31:34] Beatrice: Yeah. So it's pretty... you know what you need to do when you start a Focused Research Organization. Yeah.

[00:31:42] Anastasia: I think that one of the biggest differences between an FRO and a DARPA program is: a DARPA program will be like, "I know I need to go that way. I would like to be able to sense this thing, or treat patients in the field, but I don't know the best way to do it. So I'm going to take six or seven or ten bets, kill the performers that aren't working, and keep moving in that direction." Versus an FRO, where it's: "No, I need to build this product. This is the roadmap to get me there, and I'm pretty sure I can get there." There's always some scientific risk—even in assay development, not all the assays work—and some projects have more of it; E11 has a lot of scientific risk. But it's still very clear what they're building at the end and how people would use that product. Which is very different from how a DARPA program runs.

[00:32:39] Beatrice: Yeah, yeah. Are there any more capabilities that we haven't touched on, especially in relation to the Intelligence Age, that you want to mention before we go to the next?

[00:32:56] Anastasia: I think that neuroscience is going to continue to be incredibly important. There's a philosophical reason, which is that if we are creating intelligence, we had better understand how our own intelligence works. If it is true that we're creating a super-intelligence—which I am not convinced of, but there's some possibility—we had better know what intelligence is, how our brains get there, and how we run those same systems. And even just to improve our own capabilities: can we figure out how to solve depression? Can we figure out how to tune our moods? What are the capabilities we can unlock by using AI? So, I think that neuroscience continues to be a priority for us.

[00:33:52] Beatrice: You've had so far two FROs in neuroscience.

[00:33:55] Anastasia: Yes. We have two US-based: we have Forest Neurotech and E11 Bio.

[00:33:59] Beatrice: Yeah.

[00:33:59] Anastasia: We're looking at a handful of other, both US and UK-based, neuroscience FROs.

[00:34:06] Beatrice: And can you tease what their focus would be?

[00:34:10] Anastasia: Trying to figure out if I should say this. Yeah.

[00:34:13] Beatrice: It's not mandatory.

[00:34:15] Anastasia: Well, the UK one—and we should double-check before we release this episode—is building in vivo videos of connectomics. So, like movies of how synapses are firing, and then, over time, how synapses change when you learn. It's actually, again, a little bit like E11 Bio. The videos are important, and we'll hopefully look immediately at treating postpartum depression and maybe even make discoveries pretty quickly. But actually, the thing we're working on is: how do you build the tooling to let you take these videos over the course of months as brains change? So we understand how circuits rewire and how those systems develop.

[00:35:04] Beatrice: Very interesting. We'll check whether we can keep that in, but very interesting. And also nice that you're... is that your first organization that wouldn't be in the US, or—

[00:35:15] Anastasia: So there's already one non-Convergent, non-US FRO, which is Gabi Heller's FRO, BIND. And it's working on intrinsically disordered proteins, and she's amazing, and their work is amazing. That was funded by Research Ventures Catalyst in the UK. We are working right now with ARIA to... we ran a call for proposals, then we ran a residency, and we're going to launch one or more FROs early next year in the UK.

[00:35:46] Beatrice: That's so exciting. So the model is catching on.

[00:35:49] Anastasia: Yeah. Ilan Gur, as I mentioned, runs ARIA and is, I think, the person who really inspired me. But I think that people both in governments and in science are attracted to the idea that you can build a tool and know what you're buying, to some extent. You want this capacity, so rather than having a bunch of people work towards it, or rather than funding some labs to maybe build that exact capacity, I think it appeals both to federal funders and to philanthropists.

[00:36:25] Beatrice: Yeah. So, moving a little bit toward a larger-scale philosophical framing: while doing research for this episode, I found some of your work where you talk about how we need to think more about "steering" rather than just accelerating technological progress. Could you expand a bit on that or explain what you mean by it?

[00:36:54] Anastasia: Yeah. I think that there is a notion that technology only goes in one direction: you go faster, or you don't go at all. But I think that we can pick selectively beneficial technologies. We can choose what we want to accelerate. This relates very much to the biosecurity question. You can pick technologies that have dual uses—they could have positive outcomes, they can have negative outcomes—or you can select for defense-dominant technologies, things that work primarily on the defense side.

I spend a lot of time... I'm on the board of Blueprint Biosecurity, and we've spent a fair amount of time roadmapping in the biosecurity space. So, if you look at things like advanced PPE, right? Ways to prevent you or others from coming in contact with—

[00:37:53] Beatrice: PPE is like personal protective equipment?

[00:37:55] Anastasia: Personal protective equipment. This is like a defense-dominant technology. There's not a dual use for it; there's not a negative outcome from it. So you can think about things also in the built environment. So, if you were to put far-UVC light... so, we already use UV light to sterilize water; we've done that for decades. If you go to your gym and you put your water bottle in, there's a little tiny UV light in there that is sterilizing that water as it comes out.

There is a band of UV that's not dangerous to humans: far-UVC. If we put that in this room, in conference centers and hotels and hospitals, could we reduce transmission rates? Or there is something called glycol, which is actually... if you go to a rave, you're very safe from a biosecurity event! (I'm joking.) But it is in fog machines already. This tiny molecule that, in far lower concentrations than would be in a fog machine, can actually really disrupt viruses from being transmitted. And then the last one is air turnover. So, places that have higher air turnover have a lower viral load in the atmosphere.

So, if you were to push forward these three mechanisms and develop technologies in these three spaces, it is like a defense-dominant scenario. And so, I think that these are things that we want more of in the world, especially as potentially AI is increasing risks from biosecurity threats.

[00:39:28] Beatrice: Yeah.

[00:39:28] Anastasia: And so we're thinking about that across bio, across hardware, across software, across science. What are the things we can do that accelerate the futures we want and don't accelerate negative futures?

[00:39:43] Beatrice: Yeah. I think that's a great framing. It relates to a lot of things, like the d/acc framing of Vitalik and stuff like that.

[00:39:53] Anastasia: Yeah. It's very, very inspired by Vitalik’s d/acc.

[00:39:56] Beatrice: Yeah.

[00:39:56] Anastasia: Yeah.

[00:39:56] Beatrice: Also, yeah, you mentioned Blueprint Biosecurity. I think I heard Jacob Swett say at some point that we will look back on the air quality we have now the way we look back on general cleanliness on the streets in the Middle Ages. Because there's so much more we can do... we're at a conference right now, and when you start to think about it all... I guess especially after COVID, we all became very aware of these things.

[00:40:32] Anastasia: Yeah. And I think that one of the things that's really hard is that this is a classic collective action or public goods problem, where no one really wants to fund this work because we all have to live in this world. The downside of the next pandemic or a biosecurity event is a theoretical downside, and it's hard to rally resources to prevent those types of things. It's not something that anybody feels they should be funding. And so I think it has been harder to get support behind some of these really clear biosecurity efforts.

[00:41:14] Beatrice: Yeah. Yeah. I think, like, back to the steering point, just the hour before you came in, I interviewed Jason Crawford—

[00:41:22] Anastasia: Oh, cool.

[00:41:22] Beatrice: —who just wrote his Techno-Humanist Manifesto. And in there, he talks a lot about how his vision for what we should be aiming to do is to increase human agency. And that's what he sees the progress movement's purpose as being. And I like that framing in relation to steering, because in one sense, just thinking of it as, "Oh no, we can only accelerate and there's only one way to do it," seems to not account for the agency that we have in steering this.

[00:41:59] Anastasia: If you walk around an academic conference—I don't know how many of you have gone to academic conferences—you'll go to this poster session and you will see tens or hundreds or thousands of grad students and postdocs with a poster, and it represents their work. If you ask them why they are doing that work, they'll always have an answer.

If you ask them why they're doing that work again, they may have a less clear answer. Like, "Why is this the thing you've dedicated your life to? Is this the most important thing you could be working on? Is this the highest-impact thing you could be doing?" I think most of them won't answer "yes" to that last question. I think one of the things that Foresight has done, that Convergent is doing, and that Renaissance Philanthropy is doing, is trying to raise the level of ambition of scientists—of saying, "No, actually, you can change the world. You can build organizations that change the world. You can build technologies that make the world better." How do you identify those opportunities and go after that? This is like really important.

[00:43:03] Beatrice: Do you have any recommendations? Where can they find the best information so that they can make the most informed decisions on what to work on?

[00:43:17] Anastasia: I don't know if there's a single place I would point to. Talking to people outside of your field is a really big one. Talking to people in other fields, talking to non-academics, talking to policymakers... there is this really wonderful community of people out there trying to make a difference and sharing information. Foresight is part of that. The Institute for Progress is part of that. FAI is part of that. There are all of these orgs looking at scientific priorities. So, for scientists specifically, getting into some of those circles, asking questions, and talking outside your own field is a big part of it. It's very easy to get caught up in the world that you're in.

[00:44:02] Beatrice: You guys also, I think on your website, you have some resources and you created the Gap Map, right?

[00:44:09] Anastasia: We created something called the Gap Map, which is an artifact of all of the conversations that Adam and Mary and other people on our team have had with scientists, identifying what the biggest gaps in the R&D ecosystem are right now and what technologies should exist.

[00:44:29] Beatrice: That must have been a big project, identifying all these gaps.

[00:44:35] Anastasia: Yeah, it was. To some extent, it was actually just like everything Adam has done for the last decade, and then a big project of taking his brain—

[00:44:43] Beatrice: —his brain—

[00:44:44] Anastasia: —yeah, trying to figure out how to take all of those notes and synthesize them into this Gap Map. I think that people often ask us how we built it or how we build roadmaps. It's complicated. I don't know if there is a good formula for it. I think that you can get to a field and then there are some exercises you can do to roadmap within that field. But in this really multidisciplinary, big way, a lot of it is talking to experts and leaning on what they know and what they see, and asking them what they think the biggest bottlenecks in their field are. What's blocking their progress? Why is nobody working on this? Who else could work on it? And doing some sort of mass interview process and then being really generous with your time and with connections to continue to explore that network.

[00:45:33] Beatrice: So, in your preparations, there's no one-size-fits-all recipe for how you've done the roadmaps?

[00:45:40] Anastasia: If I were to pick two things that are really common: you have to work multidisciplinarily, both because it changes the way you think and because you think bigger as a result. Every time we've run one of these roadmapping workshops, where we get people together around a field and talk about it: if you do it with only a bunch of experts in the field, you just recreate the same conference that's happening all the time anyway. Everyone talks about their papers, or they don't talk about their papers because they don't want to get scooped. But if you bring in that "weirdo blogger," somebody from an adjacent field, practitioners, and users, and you put all of these people who don't belong together in the same room, really magical things come out of it. So that's a big one for me: being multidisciplinary and talking to a lot of people.

[00:46:34] Beatrice: Cool. Yeah. I mean, that very much fits what Foresight is doing and has been trying to do for a long time. Is there anything else before we wrap up that you think we've missed talking about regarding the work that you guys are doing?

[00:46:51] Anastasia: I think that we're at a really interesting moment where there's a bunch of new philanthropy and people are trying to figure out how to use that most effectively. And the government is looking at different ways to fund science, and there's a bunch of talk about metascience. I think that there's an opportunity right now to actually... Convergent Research was one of a handful of orgs that came up five years ago who started to make these changes. But I think that we can either say, "Okay, we've run an experiment, let's see how it goes," or really push forward a systematic change with support from philanthropists and policymakers. And I think that all of these groups—scientists, philanthropists, policymakers, metascience people—working together is really important for the changes we want to see in creating that positive future over the next few years.

[00:47:50] Beatrice: Yeah. Yeah. We had a whole track now on philanthropic funding and how we can use it most efficiently—new models and everything. I guess check out the Vision Weekend recordings; they will be on YouTube. Perfect. I think then I just want to ask you one final question: when you think about the future, what gives you existential hope for it now?

[00:48:17] Anastasia: I think it's communities. I am fortunate to interact with people—scientists and policymakers and philanthropists and people here—who are all genuinely trying to build that better future. And there are so many people putting their grit and their dollars and their time and their effort behind that. It gives me a lot of hope for humanity and the world that we're building.

[00:48:47] Beatrice: I agree. I think that's in general something that brings me a lot of hope, and I think it's pretty universal—if you find a community that you align with, it's very catalyzing in many ways. That's it. Thank you so much, Anastasia.

[00:49:06] Anastasia: Thank you.

RECOMMENDED READING

Tools and concepts

  • The Gap Map: A resource by Convergent Research identifying important scientific bottlenecks that are currently neglected by academia and industry.
  • Worldbuilding and backcasting: A set of tools to clearly imagine a future we want to live in, and figure out the steps to actually build it.
  • The Great Refactor: A project aimed at using AI to rewrite widely used but insecure infrastructure code (C/C++) into safer languages like Rust.
  • The Techno-Humanist Manifesto: A book by Jason Crawford (Roots of Progress) that lays out a hopeful, human-centered philosophy of technological progress. We discussed it in our previous podcast episode.
  • d/acc (Decentralized, Defensive Acceleration): A framework proposed by Vitalik Buterin, focused on speeding up technologies that favor defense over offense (such as biosecurity or cybersecurity).

To learn more about the technologies and organizations mentioned by Anastasia:

Organizations

  • Convergent Research: The incubator co-founded by Anastasia Gamick and Adam Marblestone that launches and supports Focused Research Organizations (FROs).
  • Foresight Institute: Our parent organization and host of Vision Weekend, focusing on high-impact technologies like nanotechnology, neurotech, and biotech.
  • Renaissance Philanthropy: An organization founded by Tom Kalil aimed at increasing the ambition of scientists and philanthropists.
  • Blueprint Biosecurity: A nonprofit focused on "defense-dominant" biosecurity technologies (Anastasia serves on the board).
  • Advanced Research and Invention Agency (ARIA): The UK’s "DARPA-style" agency, led by Ilan Gur, which collaborates with Convergent on UK-based FROs.
  • Institute for Progress (IFP): A DC-based think tank focused on innovation policy and metascience.
  • Neuralink: Elon Musk’s company pioneering brain interfaces, mentioned regarding Anastasia’s background in early-stage startups.
  • Science Corp: Max Hodak’s company, used as an example of a "hard tech" venture.
  • Until: Laura Deming’s company, mentioned in the context of visionary scientific leadership.

Focused Research Organizations (FROs)

  • E11 Bio: Focused on high-throughput connectomics and building synaptic-level maps of the brain to understand how consciousness arises and treat neurodegenerative diseases.
  • Cultivarium: Working on non-model organisms to unlock the biological potential of microbes that are currently difficult to grow or engineer in lab settings.
  • Lean FRO: An open-source software project developing a language for formal mathematics and "formally verified software" that can automate proofs.
  • Forest Neurotech: A neuroscience-focused FRO developing minimally invasive tools for brain imaging and stimulation.
  • BIND: A UK-based FRO led by Gabi Heller, focused on "intrinsically disordered proteins".