Eric Gilliam studies how organizations like Bell Labs, early MIT, and the Rockefeller Foundation helped drive scientific progress — and what made them unusually effective.
In this conversation, we explore how those models worked, why many of them disappeared, and what it would take to bring them back. Eric explains why fast-moving, engineering-driven labs like BBN (which built the first nodes of the internet) may be essential to accelerating progress in fields like AI, biotech, and beyond.
We also cover:
Eric runs FreakTakes, a Substack focused on the organizational infrastructure of scientific progress. He’s a fellow at the Good Science Project and works with ARIA UK and Renaissance Philanthropy to support new models for R&D.
Eric’s vision for the future centers on fostering a new era of scientific progress by recreating effective historical models of R&D, to focus on high-impact areas that directly enhance human well-being, such as improving housing, food production, longevity, and leisure time.
Host: Hi everyone. Welcome to Foresight Institute's Existential Hope podcast. I'm really delighted to have Eric Gilliam here with us today. Thank you so much for joining. You're quite a prolific writer, producing really interesting and deep dives, especially on how 20th-century R&D operated. We just published another podcast episode where we looked into the past to look into the future, and I think you're doing that at a level where it's quite detailed and useful actually. It's not just broad stroke conclusions; you really dig in there. You're also an associate at Fuld, and still a fellow at Good Science?
Eric Gilliam: Yes, both.
Host: Cool. Good Science is another really fantastic project that we are big fans of at Foresight. We've also touched on that in a previous podcast, so if people are interested in more on that, there's a lot more content available. We met, I think, literally through a DM on X. We were just like, "Okay, let's have a call, let's say hi." Rather than beating around the bush and telling each other what our organizations are working on or what we're writing about, you immediately asked me about a very concrete problem we have and then proceeded for the remainder of the 25 minutes to try to solve it. That was one of the more unusual and productive calls I think I've ever had as an intro call. So, thanks for that. And no pressure on this conversation now, but let's try to get into some concrete, nitty-gritty stuff about how we could improve the operating systems of organizations. Perhaps even of societies, in a way where we can make progress happen in a useful way again, possibly. Okay, so maybe you can take us on a little ride. Perhaps you can describe where you got to, what you're writing about today, and what you focus on day-to-day.
Eric Gilliam: Yeah. So, I came to be doing whatever it is I do right now quite randomly, it would seem to most people. During undergrad, I was a political science major at Stanford, and I did as much computer science as I could. I was very excited by all things R&D, computer science, and normal science. I wasn't very good at it, though. It was more just a group of people I admired a lot. I didn't really think about it beyond reading books here and there. My first job after college was with Steve Levitt, the Freakonomics economist at UChicago. He was running something of a "think-do tank." It wasn't really related to this in any capacity, but at one point he came in with a science-related idea and said, "What would a Manhattan Project for X look like?" I didn't really know, so I randomly went and picked up Richard Rhodes' The Making of the Atomic Bomb. I was enthralled; I loved it. But I didn't know yet that my life, at least my professional life, would be different from that point on. In reality, I read it and I said, "Oh, I've always had a good nonfiction reading habit." I had read a lot of dense social science and applied micro papers before. But once I had a job, I said, "Oh, I kind of like nonfiction that reads like fiction." I don't really enjoy reading; I didn't grow up reading. It's a bit of a labor for me. So I like picking up practical details; that's what kind of pushes me to do it. For a year or two, I would just read biographies of these dead guys with a shocking number of divorces. I would read their biographies, or the way I get curious is I would go look up their oral histories and say, "Oh, that's a weird interaction. I wonder if there's a sociology or economics literature on that?" It was purely a hobby until, at one point, it dawned on me. I lived in Chicago; I wasn't in the West Coast sphere as much, other than my undergrad friends.
I thought it was cool that people were spinning up FROs or that something called the Arc Institute was coming to exist, and I'd see some blogs or things on history relevant to them. I didn't think the blogs were very good, and that's just what happens sometimes in your area. Somebody writes about your area in the New York Times, and you gripe to yourself, "Oh, I think they got that wrong." I did that to one of my Bay Area friends on the phone, just complaining, nothing productive. He said, "No, you don't understand that people like Patrick Collison read these blogs. Maybe you're wrong, and you're wrong for reasons you don't know, but throw your hat in the ring." So essentially, I said, "I'll write five pieces and I'll see where it goes," and here we are. When I started the Substack, called FreakTakes, the goal was that I would like one new science organization or something of that sort to do one thing differently because of something I wrote. So far, it's gone quite well. It turned out that the Institute for Progress people or people like Stuart Buck at the Good Science Project saw my stuff quite quickly, and they thought it was useful. I think within three months of starting the Substack, I was already a part-time fellow at the Good Science Project. From there, it's just scaled. I don't view the goal of the Substack as getting Noah Smith or Matt Yglesias numbers of readers—those guys are great, I can't do what they do. Really, it was just, "I want to be essential to somebody like Adam Marblestone, who you've also had on the podcast, or Tom Kalil, who is now my employer at Renaissance Philanthropy." So it's very arbitrary, and I guess being really nerdy about one particular thing and not thinking a lot about professional stability helped. I just threw myself into it because I liked it, and it turned out history departments don't care about this area.
Host: Do you know if any organization did anything differently already because of what you wrote?
Eric Gilliam: Oh, I struggle to point at one, at least not things I would want to claim credit for in public. There have definitely been some small things where I'm like, "I've expanded the ambition," and only history will tell if they turn out to be not minor. There have been quite a few of those, and in the beginning, all I ever hoped for was minor things. Things like, "Hey, you're an ambitious applied R&D operation. It seems to me like Bell Labs systems engineers were the best systematized way to make sure you're triaging the right questions and you know you're not wasting money." It's something where there's a market that VCs wouldn't want to touch. When you talk to organizations like this early on, they'll often be like, "Oh, you're right. That's just a very good idea, and we're just going to put this responsibility onto a person we have on the team." But I wouldn't want to take credit for any of them, because often, if they're really late, you can't really convince them. If they're really early, they're having all sorts of discovery calls, and you could say, "Oh, maybe somebody else had that idea." I'm mainly just excited that people are there to be convinced and that they find history useful and applied.
Host: Yeah. Could you share a little bit more information about the specific Bell Labs example? I know that's one you've really taken apart.
Eric Gilliam: Oh yeah. So, where the Bell Labs piece came from is, every so often I'll write a piece and say, "I think people will use this; it'll be great." But in reality, I view the whole "Freak Takes" as an applied research shop. So I try to get out there and talk to people who are building new science organizations to just understand where they are open to being convinced. One of the organizations that was spinning up adjacent to Convergent Research or under their umbrella—I forget precisely—essentially they said, "It's our sense that Bell Labs used to do X, and that they would show people this booklet about how the telephone system worked." That's how you get what they call a "target-rich environment," which seems popular on the West Coast. I read a modest amount of Bell Labs history, and I had maybe seen the book mentioned one time or something; it just didn't really come up. So on the call, I said, "You're there to be convinced. If I wrote something up for you in the next month, you would use it, incorporate it into how you set up your management structure?" It was early days, and they said yes, this would be very useful. It turns out, people consider Bell Labs a bastion of academic freedom, where you did whatever you wanted in a corporate setting. That's not precisely true. A lot of organizations, like Bell Labs or early GE Research, exercised this principle in management, which I call "long leash within a narrow fence." They want to give you the room to work on something for three years, but they're going to try to find a way to bound you in some way.
So, at early GE Research, I'll do early GE Research and then talk about Bell Labs. At early GE Research, when they hired somebody like Irving Langmuir, who ended up winning the Nobel Prize in surface chemistry, they brought him in from the Stevens Institute. He was a pretty applied guy, but a basic researcher, and they said, "You could work on anything you want," which is what people think places like Bell Labs did, conditional on it solving a problem that's going on over there on the applied side of the lab, on the engineering side of the lab. So when somebody like Langmuir goes over there, you find all sorts of problems. For example, "Hey, your bulb is lit right now," and they're like, "Yeah." Then he names the circumstances and why the literature says it shouldn't be able to stay lit in this environment. The engineers put their thumbs up and they go, "It's lit. We don't really know what to tell you." That's the kind of thing where he goes, "Oh, there's a hole in the literature." Langmuir, on some of his more extended investigations, might spend three years just conducting very thorough courses of experiment, like putting different gases into the bulbs or different vacuum environments, to just understand how they're working and build up a theory.
Bell Labs did a good job of professionalizing this. They said, "We're a much bigger operation; we're huge. We're not going to have you all walk to the other side of the lab; this is much too big. What we have is about 10% of our headcount—forgive me if I'm wrong, it'll be correct on my Substack—who are good physicists, good engineers, good chemists, whatever, and maybe not top-tier. And also maybe could have been consultants in another life, or the kind of people who are founders today; they're very practical." These people would tour Bell's manufacturing operations or pore over Bell's budgets to understand where they're having problems and the market sizes of those problems. One of the most profitable problems that Bell Labs ever uncovered came from one of these systems engineers, whose job it was to know that side of the practical operation and also walk around Bell Labs and the basic researchers and know who's doing what, and who's interested in what. One problem they brought to the metallurgists one day essentially said, "Oh, we have these poles that degrade at this rate in this environment, and it's something on the order of a hundred-million-dollar-a-year repair budget for us. Anything the metallurgists can rig up to solve this problem, this is a billion-dollar problem for the company over a 10 or 20-year period."
Once you know it's a billion dollars, all of a sudden it gets quite easy for Bell Telephone on paper to say, "Yeah, we have, I don't know how many people are on it, three people, eight people, 15 people, all focused on rigging up this one material to do this one thing and really doing fundamental research investigations to look into it, just because it's such a big problem." And you have the freedom to do that and know you're doing a good job when you've scoped problems accordingly, knowing the market size and all of that. So to me, if I were a new science organization, a Bell Labs-flavored systems engineer has a place in almost all of them. If you say, "Oh, our research doesn't need to be useful for 20 years," that's fantastic; in that case, the markets that are interesting to your systems engineer really expand. But I think it's telling that a lot of the history of American applied R&D does use people like systems engineers or market signals in some way to dictate the work.
Host: But doesn't it seem even more impactful to know where to draw the fence around before you do the long leash? Like, basically, isn't the type of problem, or the types of problem area that you allow someone to solve within, super impactful for the impact of the solution you'll get? So, arguably, you want a lot of thought to be put into what problem to even have these people focus on if they're actually spending a good chunk of the budget. Or what problem area, what fenced-off area, do you allow them to have a long leash on, rather than just saying... Are you saying it's bad to have the systems engineer drawing the fence because you're artificially limiting things, or what precisely are you saying?
Eric Gilliam: Sorry. Okay.

Host: I thought that, basically, Bell Labs was giving them a fence, an area, basically. They couldn't just pick any problem at random; they had to pick a problem within a specific area. Then they were able to totally have a long leash on what they wanted to pick and how to solve it. But arguably, isn't the meta-problem of what to even focus on, in terms of where to draw the fence, also a large problem?

Eric Gilliam: I guess something to remember is the systems engineer was not necessarily dictating to them top-down what they could work on. Probably about half their time or brainpower, I'd have to look into the numbers precisely, was spent walking Bell Labs itself to know what people are working on. The way a Bell Labs budget would work would be different than a lot of corporate budgets today, where you can't run a single experiment without the company signing off. They were pretty loose with budgets below a certain point. So you had a lot of people fiddling on really cheap crackpot projects or ideas, or taking stuff home to work on. This is just the nature of engineering firms in 1950, or something like Draper Lab, where even though they were doing very classified stuff, the lab had this almost formal process of "we don't ask questions when the equipment disappears for four days at a time" because that's where the new exciting stuff comes from. So I think, given the systems engineer is going and talking with those guys all the time—who a lot of times are technically more brilliant than the systems engineer might be, and the systems engineer might be a chemist themselves—the whole thing is an ongoing process. If somebody is really convinced something is good on the Bell Labs end, you could imagine them also pushing the systems engineer to look into the practicalities to find ways to make it profitable. But it's not a perfect system, to be very clear.
One of the side projects Bell Labs was not funding was Claude Shannon doing his information theory work in, I believe it would have been, his Manhattan apartment. It was a really great theoretical problem that came out of the applied work he did day-to-day at Bell Labs, but they missed that one. Maybe I'm wrong, but I'm pretty sure they missed it, and they probably felt a little guilty about it. So when people talk about Claude Shannon doing whatever he wanted at Bell Labs, that was after he figured the information theory stuff out, and they gave him borderline emeritus status because it was worth so much to the firm. They weren't really properly funding him to work on it beforehand. So you do miss. If you're going to say, "It seems like you're going to miss maybe the most exciting thing that could come out of the labs with this process," I think you would be right. But if somebody wanted to counter you, they could also say, "It seems like Claude Shannon had two massive discoveries in his life." There was his master's thesis, which he figured out while working as a very practical person on Vannevar Bush's differential analyzer project, and that's where we get Boolean algebra applied to computers. Then, maybe Bell was underemploying him on this stuff, but that's when we got information theory out of him. The second he gets Bell Labs emeritus status or leaves and does a lot of juggling and unicycling at MIT, we're not getting the same practical outputs out of him. What he spends a lot of time on—Jimmy Soni's book is actually very good on the frittering parts of Claude Shannon's life—is figuring out ways to mathematically describe random things. He still seems quite brilliant, but he's up to playing cards and things of that sort.
So I think the Claude Shannon story is a really complicated one, but it's very interesting because I think a lot of people who disagree with each other on applied and basic research have a lot of things where they can feel right. But they have to acknowledge things wouldn't have happened with their pure applied or pure basic system.
Host: Super interesting. Do you have any other examples, apart from Bell Labs, that you like to draw people's attention to? Like, basically, or I guess even on a more meta level, when you talk to other organizations, do you usually have a go-to list of, "Here's a few really interesting historic examples that probably most research organizations or most organizations could learn from?" Or is it more, you talk to an organization, you figure out what their problem is, and then you're like, "Where in the history of scientific innovation could I find something that would be useful here?" Are there a few blanket solutions that just generally tend to work across the board because most organizations fail in the same ways? Or is it super specific?
Eric Gilliam: If there's any way my new job relates to doing "Freak Takes" or stuff I do with Renaissance Philanthropy and Good Science Project and my first job at UChicago, it's that I feel Steve Levitt, the Freakonomics economist who I worked with, would always say all sorts of stuff offhand. He'd say them as generalizations, really. He meant them with caveats. But you're like, "Ah, you just say the generalized thing sometimes." But one of them that he would say is, "You could never convince anybody of anything," or "People only want to work on their own ideas." That's not always true, but as you get older, you're like, "Oh, it's truer than I would've anticipated it to be." So I think I try to... the reason I was willing to start the Substack is I thought people who set up these organizations often already adore places like early MIT, or Bell Labs, or GE Research, and my job can be to sell them on it even more and then unpack it to make it good, practical, and useful for them.
So I guess when I show up to these calls, I really want to know what their inspirations are and what excites them. Then I say, "Okay, this is interesting. Let's latch onto that." Sometimes I think maybe they're a bit mistaken on the history of something, and if they say they like X, Y, Z, and therefore they like this organization, it's "Oh, maybe you're more Edison's research laboratory than GE Research," or something of that sort. But I really try to let them take the lead. A, for the behavioral aspect: they already decided what their inspirations are, so why would I fight them? And B, there's usually a touch of genius in these people, and they're very technically competent, and I don't really have those things. I view myself as providing a service to people who are the geniuses of our time. People like Adam Marblestone or something like that. So I don't think it's for me to say, "No, that shouldn't be your, what excites you, your muse, the thing you want to model yourself after." I say, "Okay, I'll take what you're saying as gospel. Conditional on that, there's some stuff that successful near-neighbor organizations from history kept in mind that I think you should keep in mind. What's the best way to do this?" Sometimes I'll just send them a piece I've already written. Other times I'll say, "Let me write you a one-pager," and it'll distill five pieces of stuff into something very practical. I guess to me, this is quite a dream gig. In the very beginning, I thought, "Oh, maybe I'll get hired as an ops person at the Arc Institute, and once a year when they have some random question like this, they'll let me be the person who handles it." So the fact that I get to do it all the time is great.
Host: I'm sure that any organization that you interface with feels very similarly, because who doesn't want someone who deeply examines this and tells you, "Hey, here's this totally obvious thing that you could be doing better," unless heads must roll or there's someone really on the other side of it? It's just a really awesome thing to know.
Eric Gilliam: And that's why talking to organizations that haven't started executing yet is quite exciting. Because if they've already spent $10 million, the answer might be, "In the back of my head, I wouldn't have spent the money like that, but this person is very excellent. I maybe just don't see it, and I'm not good to advise on all sorts of this, and there's only some small thing." But in the very, very beginning, when the money's been allocated and no dollar has been spent, there's no more exciting time.
Host: Yeah, maybe to segue into something, just because I listened to a Freakonomics episode this morning, which was on "sludge." It was mostly talking about sludge in a general way, in the sense that there's just a lot of bureaucratic sludge, which was defined there in opposition to "nudge," which makes it much easier to make the right decision, or the decision that afterwards you would've liked to have taken, thanks to Kahneman and Co. But sludge is this phenomenon where over time, or from the get-go, there are a lot of bureaucratic hurdles you have to jump through. Processes that are extremely hard, super costly, and hidden. That could be filling out really long forms, or making it really hard to unsubscribe from a subscription. Some of them are on purpose, not all of them are. I was really interested in that because you've also written, or at least alluded to the fact, that progress isn't necessarily slowing down only because ideas are getting harder to find; it's also that these bureaucracies are possibly getting much heavier and more risk-averse. Could you talk to that dynamic a little bit? Maybe we just start there, and then I have a question for you about how that might develop in the future. How do you think bureaucratic organizations slow down innovation today? And what are some quick fixes that you'd recommend if you are such an organization, for example?
Eric Gilliam: So I guess the theory you're getting at, which listeners can check out, is from a piece I wrote called "Math and Physics Divorce: The Burden of Knowledge and the Scientific Slowdown." Essentially, people who read scientific history largely think a lot of the economists of innovation are maybe a bit too confident in the burden of knowledge as the main driver of the scientific slowdown. Nobody's really arguing against the claim that it's harder to learn more facts than fewer. But there are a lot of scientists, early progress studies folks, etcetera, who would note that in mid-century physics, in the 1950s, the average age of a citation in the Physical Review was apparently 18 months, and people were having to learn to cite letters and things of that sort. This is 10, 20, 30, 40, 50 years after a lot of these early physics discoveries, when a lot of people, if you were purely following the burden of knowledge hypothesis, would say, "Oh, it will only get harder." Somebody like Gerald Holton, who I wrote a lot about, I would consider one of the founding fathers of progress studies, one of the people who helped found MIT's SDS. He was a physicist who became a historian of science, and he was a physicist in this very dynamic era. Something he would say—and I think the oral history evidence from people like Warren Weaver, Richard Feynman, etcetera, a lot of people who were greats for decades across that era, would agree—is, "Oh, we have this really good system where, when we create new branches, the really young people are incentivized to hop on them at 24 and attempt to make their career."
In the same way that in the Ben Jones burden of knowledge paper, they would look at how bad our productivity per dollar or per research head is getting in something like corn per acre or in transistor density or things of that sort. He shows a similar graph, and it's particle accelerator speed, power—I forget which. Essentially, it's a log-y axis, and it's just a straight line, so it's going up exponentially. But then they have a separate graph where they break it down and they say, "What's happening?" It's all these new branches of technology that are exploding for five years and then curtailing. In the same way that the economists of innovation who look at things like transistors or corn will be like, "It's our sense, this is just how this works. This is the most easily measurable one." Holton would say, you could say his theory fits within the burden of knowledge hypothesis, which would say it is harder to learn more facts rather than fewer, but that's within branches. Scientific branches, new branch creation, and sustaining them as long as you can, and using them to create more branches, is what drives explosive scientific productivity growth. So people like Holton, when he gives advice in the 1960s, it would be, he says, "It's very clear," and to him, it's very clear, "the era of the department is over." Everything's going to work like the MIT RAD lab or something like that, where maybe you have core fields, but the research happens in integrated units, and you keep spinning up new ones as it goes. "We're going to be very eager to give funds to people who are looking to hop on new branches because that's where the exponential growth happens."
We've seen precisely the opposite, because post-1970s, 1980s, research has become a lot more bureaucratized in the US. I talk more about this in the piece: the 1970s political movements, things like Watergate and Vietnam, were a huge turning point, it seems, for the research ecosystem, just because people began to have a default mistrust of the government and things like that. A lot of process grows out of this mistrust, the "how should we be looking into this" and all of those things. So a lot of things start to get more bureaucratized, and then they're a little more rigid. The idea of "the era of the department's over; we're just going to spin up new organizations, give money speculatively to 23-year-olds who liked your weird paper that nobody understands what to make of, and say, 'Oh, that's weird, there are four people at CMU who want to hop on it'" has gone away. I think this is also something where, when people read history, they're like, "Oh, all five friends from grad school got Harvard jobs. That would never happen today." Something to keep in mind is that in a lot of those stories, what's happening is, "Oh, all those people were in I.I. Rabi's lab, and he had a really interesting piece of early technology, and they put their whole career into saying, 'We think this is a huge deal.'" That's where a lot of these successes among people who are professionally quite close come from.
In terms of how to deal with bureaucracy, I don't have any special ideas on how to come into the University of Chicago or MIT and say, "Hey, we have a lot of professors here who think things like FROs are fantastic. How do we make it so we could run those out of the..." They're a bit ossified. I'm not a bureaucracy hacker, like somebody like Marina Nitze. So I guess in my work, what you'll notice is I do a lot of working with brand new organizations or organizations that are philanthropically run, where a few people have a really important say. For example, I've essentially ramped down my FreakTakes work from 80% of my time to, let's say, a third of my time, and I'm doing a lot of work at Renaissance Philanthropy. Why am I doing that? Because ARIA is spinning up in the UK. I think they have very sensible procurement laws, and I had all this writing on early ARPA history saying, "Hey, I think it's great that everybody wants to copy the early ARPA model, but let's be very clear: there are certain kinds of very ambitious and very applied contractors that could turn around useful technology for you." J.C.R. Licklider's team at BBN being one, the early CMU autonomous vehicle groups being another. To me, those are the teams that really make ARPA shine. If there are a few things from early ARPA history, like autonomous vehicles and the ARPANET, that make everybody want to copy the model or use some version of it—that's different, like ARIA's not ARPA, but they're similar—then ARIA may not exist without early ARPA. I think you need to be willing to fund brand new contractors that are willing to do BBN-style work, where they're like, "We're going to do stuff on the cutting edge, but we're not going to..." People would leave places like the MIT Lincoln Laboratory or Rand because they were like, "Yeah, that's applied paper studies. We're engineers. We want to build stuff nobody's built before."
Just fresh out of EE labs, or maybe even too cutting-edge for them, but we want to turn it around as practical prototypes for users. This is an idea I had on the Substack, and I was willing to go work with ARIA because it's, "Oh, it's brand new. They have procurement laws that make a lot of sense. They're willing to fund young, bright 26-year-olds who serve a core need that they have." So I'm sorry that I don't have good answers on how we fix them, because I tend to make career decisions based on who just doesn't have them and is fresh. I can avoid it because me sitting on the podcast saying, "Oh, I think they should just tear up that rule," is a bit ignorant of government processes and things of that sort. It's often more complicated, even if I wish it weren't.
Host: Okay, then let's go hyper-fresh. If you were handed $100 million now to start a private new organization from scratch, how would you organize it?
Eric Gilliam: Hmm. I think, okay, so in my day-to-day life, I try not to think about this because for the most part, I figure I help the people who this happens to. It's maybe good to not have too rigid an answer and to show up to everybody fresh and say, "Oh, they have a near neighbor from history or a few, and let's help them do that." But I guess, let's see, if I was going to do it...
I guess I'd ask: what are the themes the Substack talks a lot about that I'd be uniquely useful at acting on? One of my most popular pieces is on Warren Weaver and the early Rockefeller Foundation, and how they more or less bootstrapped the field of molecular biology into existence. It was because Weaver came around and said, "Oh wow, I'm an applied mathematician who works in physics. I saw what my field did for chemistry." There was that era, which everybody in Silicon Valley seems to understand, of the physicists and chemists winning each other's Nobel Prizes. He thought, "I think the same thing could happen for biology." In 1932, he was offered an interview at the Rockefeller Foundation's Natural Sciences Division, and he didn't think he'd be good at the job. He just went there to tell them, "I think you guys should do this." What went into the pitch was not just, "I believe this area between physics and biology can be revolutionary," but also that the physics folks had come up with lots of tools and models of thinking that were really good at the scale of the heredity problem, specifically because biology in this period was an organism-level science. They had the microscope, but the physicists had gotten really good at working at a certain smaller scale. So they ended up thinking Warren Weaver was actually excellent. When they brought him on, he had the bravery to do what I'm not sure any large philanthropy I've seen in the modern era has done. His natural sciences division said, "It's great. We have a lot of fantastic applicants who come to us. How this department has worked before you is we've sat down, and people from zoology, electrical engineering, physics could come in and make a pitch, and we just say yes or no to the most exciting proposals." He said, "That's fantastic, but nobody's above specialization.
The Rockefeller company, which has more money than God and funds this foundation, is not above specialization itself. I think we're essentially not using the money well by scattering it." The implication was, "We'll fund one area; it'll be super additive, and we're going to fund an area nobody else wants to fund." That is what you saw out of him. He starts putting 80% of the budget into this in 1932, 1933, and the field isn't even called molecular biology until 1937. Then around 1952, when Watson and Crick make their discovery, they ramp all the money out of the program, because they said, "Everybody sees it now; the other funders will do it," and they put it into the next thing. Why was he doing 80% and not 100%? There was random stuff they wanted to be helpful with the whole time. But also, when you look into those corners of the budget, you find things like helping Vannevar Bush digitalize his differential analyzer, or crop work. They're other big bets where, if they run into technical dead ends, it's very clear he's probably going to ramp them down. He saw computing and he passed on it, because the thing he changed to after the Watson and Crick discovery was "miracle rice." He'd been dabbling with a lot of that stuff, and he thought that, given everything they were learning in molecular biology, they could apply it to crop genetics. Arguably, creating molecular biology is only Weaver's second most important contribution, because the Green Revolution has Rockefeller Foundation and Ford Foundation impacts all over it.
So if I were going to do basic research with the $100 million, I would try to find some area where I thought $100 million over 10 years could go a long way with that kind of thinking. But what that would really take is me saying, "We're going to do it like this. I would like potential Warren Weavers to come to me, and one of you will get these funds, and we're not going to have a bureaucratic foundation. It's going to be just you. We need to make sure that the area is scoped so $100 million is maybe enough, and you'll get one to two technical staffers to help you run around and find really good people to fund." But that would be it on the basic research end. On the applied research end, I would do more. Basic research makes me a little uncomfortable sometimes, just because I could imagine giving away the $100 million and nothing good coming of it, which is something I'm uncomfortable with. I think I'm maybe not appropriately risk-tolerant to oversee that, so I would definitely need to find somebody else to do it. But on the applied research end, I have been doing all this writing on building more BBNs. The very brief TLDR on what BBNs are: I was talking about all those early ARPA contractors that I think were responsible for early autonomous vehicles or the ARPANET. What do I think these organizations have that makes them pretty dissimilar to existing R&D contractors today, like a Raytheon or Charles River, as well as academic labs? To me, there are three pillars that make a BBN. The first one is they're exceptionally novelty-seeking or technically ambitious. A lot of academic labs have that; it's an FRO-style technical vision. I don't think places like Charles River or Raytheon necessarily have teams with a Henry Lee or somebody like that walking around, people who have some technical vision that nobody thinks is possible.
Host: You might even briefly have to say what a BBN is.
Eric Gilliam: Oh yeah. Sorry. So where does the BBN acronym come from? BBN was the main ARPANET contractor; the name is Bolt, Beranek, and Newman. It was essentially a firm that spun out of MIT. It'll be familiar to Silicon Valley folks because this is where J.C.R. Licklider went when he left MIT. There are all sorts of interesting testimonials from the MIT professors at the time, who called BBN "the cognac of the research business" or "a hyped-up version of Harvard and MIT," but where you don't have to teach classes. The reason is, you had people like Licklider joining because they had a technical ambition, and the university wasn't the appropriate place to pursue it, because it was often too engineering- or systems-engineering focused, requiring you to turn around a product for an actual customer. So when Licklider joined BBN, he said, "I want to build interactive computers. To do that, there are clearly two technical areas we need to focus on. We need to work on real-time computing," because computing was in a batch-processing paradigm, "and we need to work on all sorts of UX improvements." A BBN, in its own way, pursues an FRO-style technical vision with a mix of contracts and grants. So with those things in mind, Licklider convinced BBN to buy a big computer for him. He actually got them to buy two. Then he said, "Okay, let's go sell some contracts." He did it not saying, "Let's revenue-maximize; we want to be the most profitable consulting firm possible." He said, "This is just an applied lab. Contracts and grants are the thing that pays the bills." So they went down to the person who ran the NIH Clinical Center, and they essentially convinced this person that hospital administrative records, in the early 1960s, were clearly going to be computerized at some point. "Let us build you the first system now." They didn't do this because they actually wanted...
the BBN folks didn't care about electronic medical records being digitized. What they saw was the opportunity to fund a novel technical system and get all sorts of engineering iteration cycles in a break-even or slightly profitable way. They do this throughout the 1960s, which is very relevant, because by the time the ARPANET contract comes from ARPA, they've already been de-risking this technology for five or six years. So when the ARPANET contract comes, and ARPA says, "Oh, we want people to build us these IMPs," essentially small computers that connect to the bigger host computers, it's this real-time computing problem, and this is the one group that's been de-risking it. From the time the money left the ARPA account, the first nodes of the ARPANET were delivered and functional within nine months, on budget, too. So this is the kind of thing they're working on where you're like, "Oh, that's not really a university problem." I won't bore people with the CMU autonomous vehicle story, but essentially CMU had a pretty similar founding story, in a nonprofit setting instead of a firm setting. I think this is very exciting for engineering-adjacent fields.
So the number one thing I could imagine doing (if I could find a Warren Weaver, maybe I would reconsider and do the basic research version instead) is setting up shop with the $100 million and saying, "I want to wholly seed-fund BBNs, or would-be BBNs, that want to service some ambitious area of R&D." Maybe an area of research needs a systems engineer, testbed technology, a group to tie it all together, make it workable, and expand the ambitions of the field, whatever it is. Or maybe you just want to service a field you think could be 10 times as productive if it had some group that serves a non-VC market. We're talking about the ARPANET and autonomous vehicles, but you could imagine life sciences CROs with a risk capital budget to do really ambitious stuff, or to staple longer strands of DNA together. You can imagine it in the social sciences. Adam Marblestone and I have talked about economic complexity and how people don't seem to be operationalizing that work because there's not really a VC market. But it's very exciting. I was talking to a cognitive scientist the other day who was essentially explaining to me how all these different cognitive scientists will write papers essentially guessing at how our brain approaches something like mechanics. So, how's a ball going to move in this simulation? What simplifications is our brain using? People have different competing theories; is the appropriate thing to mix them, yada yada. The field is stuck in the same way that made people leave Lincoln Labs: "Oh, it's an applied paper study."
You can imagine a world where somebody seeds, with two, three, four million dollars, a group that says, "We're going to take all that literature, try to mix the models appropriately, build technology to apply everything the field is learning, and also give them good feedback cycles." I don't know if that's where I'd put the $2 million, but if I were obsessed with that field, the fact that the MIT Minds, Brains, and Sciences groups don't have testbed technology builders is the kind of thing a BBN could address. So I'm sure we would run out of money before we ran out of good ideas from very excited postdocs with engineering focuses.
So I think a lot about BBNs too. If anybody listening says, "I think I have a BBN," the reason I've come on board at Renaissance Philanthropy and I'm doing less research work is that, with ARIA, we're trying to help them build BBNs to service their portfolio. I also want to do it in America and elsewhere, too. Finding great founders is the first step. I'm not the genius; I want to help the geniuses of my time. So if you have ideas, please email me at gilliam@renfield.org.
Host: But you wouldn't want to, let's say, constrain it to a specific area that you're particularly interested in, in terms of the people you fund?
Eric Gilliam: I would consider it. I think specialization is big. If I were going to specialize, what I could imagine is taking $50 million or $60 million, putting it to the side, and saying, "Those are funds that get spent from year four onward. We're just going to invest them in bonds for now." Then going in and saying, "We are going to invest in you all. This is the first stage of the program. The first four years are like the 20% side bets in Weaver's portfolio. We are looking for the thing we want to dump $60 million on. We're not confident in it yet, so let's seed a few of these and we'll see how it goes." To me, that would just be very exciting. You could imagine having a life sciences version where you just have a building in Cambridge, MA. I don't know exactly what they'd be doing. You can imagine embedding them in life sciences labs. You can imagine them going around, figuring out a few common areas, and partnering with the labs. I don't know precisely the way it would work for each area; it would depend on what you got pitched and the vision of the person. But it's the kind of thing where it's not just early ARPA informing this, as readers of the Substack know. They also know I'm obsessed with early MIT, and early MIT was very committed to "We serve industry above all else." Take the early applied chemistry department at MIT: at one point, during something called the "Technology Plan," when MIT was kind of broke and things weren't going well, the thinking was that there's nothing more productive an institute of applied technology can do than supply its services at a reasonable cost. We would now know the applied chemistry department as a ChemE department. They essentially helped create the modern field of chemical engineering through their contracts and some of the basic research going on there at the time.
When we think of MIT's great World War II contributions, what's important to remember is that that was the MIT contracting with the government. It was in the process of changing, because in that mid-century period, industrial R&D labs were becoming more and more common. So the thought at MIT was, "Hey, 75% industry contracts is a lot. Maybe this is not the right balance. Maybe something like 50/50 or less." Also, at GE Research, DuPont, Bell Labs, and a lot of small R&D labs at the time (hundreds, I think), there were more and more PhDs and good MIT undergrads working. So MIT could take a step back: what it means to be applied is wanting to work on things up until they're on the doorstep of industry, where they don't need venture funding or things like that. But since industry was getting more basic-research focused, MIT thought it should itself also be a bit more basic-research focused.
So MIT was in the process of transitioning. It had just recently been the place willing to do things like have 80% of a department funded by industry contracts, and it brought in Compton as president, a Princeton physics department head (Princeton's always been pretty basic-research focused) who was also one of GE's best physics contractors. So they thought, "Oh, maybe this is the person who could shepherd us into that era." But I think all sorts of the most exciting things that happened in the 20th century came out of this applied contracting. Servomechanisms would be one example from that era of MIT; I don't think we'd get servomechanisms out of MIT today. So it wouldn't really be reforming MIT; it would be building a new one. Well, not building a new one: I think MIT is still the world's best at what it does, but it's changed. Right now, a person can ask, "Do I go to MIT, Harvard, or Stanford?" or, to make it simpler, "MIT or Harvard?" and you go, "Oh, that's tough." They used to be such differentiated offerings that if you were from a rich Harvard family in Boston, they would consider having an intervention: "You're going to go to the factory people? What are you doing?" In its early years, MIT was a place that said, "You're going to be a factory foreman who knows science." That's how you solve problems: you have a scientific background and are handy. So while I think it's hard to reform MIT today, the reason BBNs are so exciting to me is that MIT is still a fantastic top of funnel for great applied research ideas, but there are all sorts of contracts they won't do today. There are all sorts of technologies that are pretty applied and not novel in the way academia would like them to be.
Maybe you're spending your time reducing error rates for something like the ARPANET; you need it to be a utility, with an uptime of 98% or 99%. MIT very likely would say, "Oh, we got it to work once, or in this context. Why do we need to make it work 99% of the time?" You could raise an FRO around it, but it's often hard to raise $15 or $20 million. It's a lot of money. It's maybe one of the meta-science miracles of the past five or 10 years that we figured out a way to make that start happening, and to start building the case that the NIH or NSF should be doing this. But I think a lot of BBNs really might only need $500,000 or a million dollars upfront, and some warm introductions to customers. In a way, if more of them existed, you could look at it as recreating old MIT in the aggregate, at least in all sorts of areas. That's why I think about them. I don't want to be in the business of competing with MIT if I can't convince them to do this thing that they used to do that I love, but maybe you could bolt it on and set them up in Cambridge, or somewhere cheaper like Pittsburgh.
Host: It does seem like it was a pretty magical place, especially back in the day. I know that, for example, our founders, Eric Drexler and Christine Peterson, I think met at MIT, or at least had the idea for starting Foresight while they were at MIT. Just hearing the old stories from the archive, it just seems like an absolutely magical, wonderful time. But I'll leave it at that. I have so many more questions, you have no idea. But I'll hand it over to Beatriz because we need to dive into some of the more future-directed, existential hope part of the interview. But that was wonderful. Thank you.
Eric Gilliam: Okay, great. And no matter what, I will potentially tie history stories in, and I guess I think about the future by thinking a lot about the past. So...
Host: I love that. In general, it's just inspiring to see someone able to use historical studies for this kind of applied work. It's just good that someone is doing it, basically. Okay, so the first thing that I want to ask you about is something you've already mentioned a little in this conversation: this idea of branches, of new branches of science. So, in the context of existential hope, do you think there are any new and exciting branches of science and technology that you're seeing right now, or that you'd like to see come in the near-term future?
Eric Gilliam: I guess this is one of those where I am probably not a good person to ask, because I'd want to talk to people who think there's a branch somewhere. As I said, I couldn't even make it through the first physics course or two during undergrad, and I did my computer science, but I was just struggling to keep up. So it's one of those things where, when people come to me with an idea, sometimes I trust them, but very often I run it by somebody who I've at least vetted as operationally competent in the area, who speaks the same language I do and who I think is very good, and I'll ask them. So I'm a bad person to ask.
What I would say is, there are certain commonalities in new branches: very often a new branch takes some existing breakthrough and combines it into a new field. If you were to ask what a ton of new branches have in common, consider what Warren Weaver was doing with molecular biology at the Rockefeller Foundation. He was saying, "We have new modes of thinking and new instruments that we've used to attack physics at this very small scale, which the biologists, working in a microscope-level science, have not necessarily been thinking about. Let's apply that to biological problems. Here's why I think it could apply." So problems of that shape are always very exciting to me, because we've already figured out a lot of the underlying stuff, and it also makes sense why we haven't applied it yet. This is also why, when I talk to folks in Cambridge, Massachusetts, even the ones who have worked in AI, they seem to have a lot less faith that scale will solve all sorts of problems their labs work on. I'm very much a fan of AI; only when I'm in Berkeley or SF do I feel like I'm bearish, because people there are such big fans. But I think they're right in saying, "We've made this new breakthrough. It makes sense that we should just start applying it wherever it fits in a new field." So I am generally excited by that.
Also, even if I don't trust myself on this: if you go back to the 1940s, Warren Weaver has an essay, "Science and Complexity," where essentially he says biology is this fascinating problem of organized complexity. That was one of the reasons he was fascinated by computers. He said computers are fantastic at modeling this kind of thing, and he thought we were going to see a century of biology and life sciences work built on computers that are really good at modeling problems of organized complexity. So I think applying computing to the life sciences is exciting. But saying "the tool will work off the shelf" is historically tough, so there's a lot of bespoke work to do. If one were going to specialize, you could imagine being the BBN for computing applications across every area of the life sciences: you do the five years of work to find ways to make the off-the-shelf CS models as productive as possible, since we know it's hard for labs to keep software engineers around, and things of that sort. But I don't have any special ideas myself. This is essentially me saying, "I am very excited about computers and biology. I am sympathetic to the folks in Cambridge, MA, who are like, 'We're excited too, so we use them, but we think some bespoke work is needed.'" That's the shape of stuff that's exciting. But there are all sorts of other exciting examples too; early information theory, I think, is one. I don't think I have a good mental model that would have caught it, but one could say, "Oh, that's the same shape. You took Boolean algebra from philosophy and applied it to this practical problem." There are all sorts of ways to come up with strokes of genius on what could be a new branch. I have a lot of opinions on how we should restructure scientific funding systems to make sure those people get funding. I don't have good opinions on how we know in advance.
I think we just have to be willing to actually go over budget every so often with the understanding that funding information theory once is worth something in the tens or hundreds of billions.
Host: Yeah, I think that's a good answer, certainly. And AI for X is something we've talked a lot about lately at Foresight. We have an AI for Bio workshop coming up in just a few weeks. It's also something that Tom Kalil, who you work with at Renaissance, spoke about at Vision Weekend last year, where he talked about how AI for Bio, like protein folding with AlphaFold, was a really good case of applying AI. It's a recommended listen for people; it's on YouTube, and it's just 10 minutes. And so another thing that I wanted to check in with you about, a recurring theme in your writing, is this idea of "why are we doing this?" That's very big for us at Existential Hope: why are we working to advance all this science and technology? For you, one recurring answer I've seen is that it should be measured by its effect on human well-being. Could you break that down a little? What could that mean practically, in terms of human well-being? And how can we do that? Do you have any ideas, or have you seen anyone do this well before?
Eric Gilliam: Okay, okay. So I'm just jotting that down. The first piece I ever wrote for the Substack is one nobody reads, partly because I think they like me in my bucket as "applied history boy," which is fine; that's the service I provide. But to me, it's the reason why I write the Substack. I don't know if you're going to ask me about two books, but I've already spoken about how I think it would be fantastic for people to read The Making of the Atomic Bomb, because it's an exciting book. Some people think science is boring, but scientists are interesting to the point of being sitcom characters. I think it's fantastic. But there's another book that really gets at "why do we do this? Why is this worth doing?" Robert Gordon's arguments in The Rise and Fall of American Growth are what convinced me to spend my time on this. I'm reflexively drawn to it because I like scientists and I like science. But the reason I can justify staying here beyond just liking it (in my old life, I started an EA-adjacent charity) is that in The Rise and Fall of American Growth, Gordon does a really good job of saying, "Yeah, TFP, TFP, TFP, but what does that number actually mean for how people live?"
Host: What does TFP mean?
Eric Gilliam: Oh, total factor productivity. It's a measure for people who like obsessive productivity measurements. And Gordon does an exceptional job of saying, "Here's the TFP growth number that means nothing to you. Now we're going to talk about when the washing machine came around." He tells you how the mother in the house and the sisters spent their days before and after it, how hard it was to wash clothes, and what it meant to get eight or 10 hours a week back in your life. It wasn't just not having to do that chore; the chore was terrible. Carrying buckets a hundred yards and back, a whole day of backbreaking labor. He's like, "This is what's baked into those numbers." He does a really good job of that. So in that first piece, where I make the argument more coherently, I say there are practical to-dos that come out of something like Robert Gordon's book. It's something like, "How do we help people buy more housing or more food, or live longer?" There's a fourth and a fifth. There are all sorts of things that happen in the Silicon Valley world around the fixed amount of screen time, or leisure, that we have. "How do we increase the amount of leisure time?" would be another Robert Gordon one. A lot of the things VCs fund because they're profitable are about, "We have some fixed amount of leisure time; how do we make it better?" That would be Netflix's addressable market. But to me, what's really exciting about funding scientific R&D is not "because it's cool." I think understanding things is great. But the concept of "How do we help people? How do we make six hours of sleep feel like eight? Or buy more food for less money, or more housing, or live longer?" I think the longevity people have it right. To me, that's an area worth burning $100 billion on until we find something exciting. So it's those types of questions that are, to me, very exciting.
But I think also, there's nothing wrong with working on something because it's cool and you could find funds to cover it. I think in so many ways, scientific history is driven by nerds who were just obsessed with something. The Wright Brothers wanted a glider. Wilbur Wright had exceptional taste, and he just wanted to make the glider; he thought it was beautiful. Orville wanted to build a car and an engine, because that's what the tinkering people in Dayton at the time were working on. And Wilbur was like, "It's crass, it's loud, it's disgusting. This thing we're making is beautiful," and that's what they wanted to do. They even turned down funds just so they could continue working on it themselves. I think a nerd with an ambition that they happen to be able to fund themselves is a beautiful thing all on its own. But I do hope that if that's the top of the funnel, that some amount of it is improving lives in those Robert Gordon-esque, very practical ways.
Host: Yeah, that's great. It's a good reminder of the grim old days, I think. Yeah. And so if you think about all this work, and the futures that it can enable, do you have a favorite vision of the future that really excites you? If I ask you, Eric, what's your existential hope vision of the future, what would your answer be?
Eric Gilliam: So this is one of those questions that I don't think about a lot, because I don't fancy myself as any kind of visionary. But I often look backward at a really great basic research funder like the Rockefeller Foundation and Warren Weaver, or at great applied research shops like early MIT, or BBN, which gave us the ARPANET, or the CMU autonomous vehicle groups. And I think, "Oh wow, sometimes history gets rid of things for very arbitrary reasons." Universities decide to specialize in whatever the NIH is funding more of; you get bigger and more bureaucratic; it's hard to respond. I think about the future oftentimes by looking backward and asking: what did we get rid of that didn't make a lot of sense to get rid of? Is there a practical way to bring it back, so the visionaries of our time, who do think about questions like the one you're asking and have a very clear vision in one way or another, have an additional perch from which to pursue their technical ambitions? So I think, "Wow, how do I get 50 BBNs set up in Cambridge, MA, and 20 in SF, and five around Carnegie Mellon?" What can those people do to change the future? Right now, if all things go well with ARIA, maybe we can help spin up two or three BBNs a year with them. These are all organizational experiments. If that goes well, how can I make it so that five years from now we have 10 total, and 10 years from now there are 50? That's the step I think about, and I hope that people with more vision for the future can use those positions to do a lot more. Maybe I can be a little footnote in their memoirs or something.
Host: Yeah. I think we can also just advise people to read your FreakTakes Substack. That's very inspirational, and looking back at history is inspirational for the future too, like we've said a few times now. But if you would recommend one article for people to read, to see if they like your Substack, which one would it be?
Eric Gilliam: Okay, can I give two or three, but in categories? You only have to read one of these. If you're obsessed with basic research and how we do it more ambitiously, there's one called "A Report on Scientific Branch Creation: How the Rockefeller Foundation Bootstrapped the Field of Molecular Biology." That's the story of Warren Weaver and the Rockefeller Foundation. I look into all the budgets, where his thesis came from, and how he did what he did. If you've been compelled by the vision of BBN that I've talked about today, and you think that's exciting and want to do it yourself, I have a piece (I don't remember exactly what I called it; I pick the titles at the end) that's essentially "A Scrappy Complement to FROs: Building More BBNs." In there, I synthesize pieces on other BBNs. If you just think these stories about applied research are awesome, I have a progress studies history of early MIT, where part one is the ethos of the place and part two is their approach to applied research. People seem to really like that. Before I wrote it, I thought the MIT people were going to be mad at me, but I've had coming up on maybe a hundred different MIT people say, "Yeah, I wish it were still like that; that's what I thought the place would be. That's not what it is anymore." MIT is the best in the world at what it does, but it's not the kind of place I describe in part one. So early MIT, and the different life cycles of MIT, inform a lot of the advice I give to people, because it really was this bastion of applied American innovation for 100 years. And it still is, in a pretty different way, in a way that's more similar to Harvard or Stanford today.
Host: And do you feel hopeful that we're moving in the right direction? Do you see things getting better overall for the whole progress field?
Eric Gilliam: Yeah. I did an interview for Asimov Press with Allan Daer; I don't know which of these will come out first. Somebody like Allan Daer has been in this field, I guess you can call it applied meta-science, for so much of his career, and it seems like it's not lonely anymore. Maybe we're still in an era where you have to be pretty entrepreneurial or network pretty well, which is limiting; all sorts of people get left out. For example, I live in the Midwest, and I don't think many of the good engineers at the University of Illinois Urbana-Champaign or Purdue think about this or even know these options exist for them. But now, if you're really entrepreneurial, you can get your way into it, and there are next steps: maybe you can get a Foresight Institute grant and pitch them a vision. It used to be that the number of things you could do, and the number of people granting or even thinking about this, were just so much smaller. So I do think things are getting better. Also, even setting aside all this applied meta-science work, which is very exciting and which I love, I think the American R&D bureaucracy often still makes quite good decisions in a vacuum; it's just that over time it gets a bit sclerotic or accumulates a lot of rules. It's easy for somebody like me to dwell on all the things the NIH doesn't do. But when you look at other countries' R&D budgets, like different European countries', I start to feel very thankful that Americans, for the most part, think it's a no-brainer (even if administrations get weird about it) to spend $40 or $50 billion a year on something like the NIH, or $8 to $10 billion a year at the NSF. So I think it's also good to be thankful for what we have.
But conditional on that, I do constantly every day look back at the history and then, in my head, complain a little bit about something they got rid of that I think was dumb, and I want to scheme a way to bring it back.
Host: We've reached the last minute we have together, so I'll just ask you one final question. I'll also say that, like you said, it's not so lonely anymore to be interested in meta-science, which is really nice, and we should link a bunch of these recommended resources in the episode notes. But my last question is just: "What's the best piece of advice you ever received?"
Eric Gilliam: I guess, the way my brain works, I don't bank individual pieces of advice and fully come to terms with how much impact they had on me, so I'm not sure I have a good answer to that. But I think of the people I read about in books as my mentors too. There are random things Steve Levitt used to say (he's still alive), but there are also a lot of small things from the books I read that really stick with me. Now I do this professionally, but in the beginning I read these biographies because I wanted to be their friend; that's what got me into it. At the top of the list, somebody whose writing I constantly go back to, and who I'd like to write a book about one day when I find the time, is Warren Weaver. I think he was a great person for the most part, and very down-to-earth. He approached the field of scientific philanthropy going for big hits, big wins. There's a certain Silicon Valley bravado that often goes along with people who do that today, but he was just the most down-to-earth Midwestern guy. There are all sorts of ways he approached problems, and a humility in dealing with things day to day, that I think rub off on me. It's also part of the reason I live in the Midwest now and stay here. It's not that this is a professionally useful place, but there's a lot of down-to-earth character here that I'm very open to letting rub off on me. I may be overly ambitious in all sorts of my professional endeavors, and I surround myself with people like that, but I like being in a place where I think people have their priorities straight. It's nice to see somebody who did things at the highest level also bring that approach to it. I know that wasn't really an answer, but if anybody is watching, his memoir, Scene of Change: A Lifetime in American Science, is also a very good book.
If you know anybody at Simon & Schuster, or whoever owns Simon & Schuster now, let them know I'd love to buy the IP rights to the book. They won't email me back because they won't reprint it. I just want to print it so people can have it; sometimes a copy goes for $100.
Host: Yeah. Oh wow. If I were to sum that up, it sounds like your advice would be: find good role models and read good books, basically.
Eric Gilliam: Yeah, and baked into that is that I'm a deep believer in the mentorship model. So find a way to get a good mentor, or pick up a good book and try to understand how that person's brain worked. A lot of great people have lived before us, and while many of them are dead now, they used to write things down at quite a high rate.
Host: Yeah, and people have been writing things down for the last few thousand years, so that's great. That's all we have time for, but thank you so much, Eric. This was really interesting; I've learned so much in this interview. Thank you.