Sam Arbesman | On Vibe Coding, AI, and the Magic of Code

About the episode

Is code just a technical skill for engineers, or is it a deeply humanistic art form capable of expanding our minds? In this episode, host Beatrice Erkers is joined by Sam Arbesman, scientist, author, and Scientist in Residence at Lux Capital, to explore the profound ideas in his new book, The Magic of Code.

Sam reframes our relationship with computing, arguing that code is one of history's most powerful "tools for thought," standing alongside the alphabet and paper in its ability to augment human intellect. He delves into the fascinating history of this idea, from Don Swanson's concept of "undiscovered public knowledge" in scientific literature to the modern potential of AI to connect disparate ideas and accelerate discovery.

The conversation also explores the democratization of creation through "vibe coding," the power of thinking of an app as a "home-cooked meal," and the critical importance of humility as our technological systems become too complex for any single person to fully understand—a theme from his previous book, Overcomplicated. Sam connects these ideas to the ever-changing nature of knowledge itself, drawing from his first book, The Half-Life of Facts.

Transcript

Sam Arbesman: When we think about simulation, it's very powerful and very sophisticated. We have to think about what it can be used for, as well as constantly approach modeling and simulation and the power of these computational tools with just a great deal of humility. Certainly, one of the areas that I think is very ripe for this is just the whole realm of scientific discovery. It's not just, "Oh, now that we have AI, we can do this kind of thing." It turns out people have been talking about this for decades. So, back in the mid-1980s, there was an information scientist by the name of Don Swanson. He said, "Imagine somewhere in the vast scientific literature, there's a paper that says A implies B, and then somewhere else in the vast scientific literature, there's another paper that says B implies C." So if you had actually read both of them, you would know that maybe it's true that A implies C. You can combine them together. He called this "undiscovered public knowledge." This idea that there's knowledge out there and it's public, but it's undiscovered because no one has actually connected these things together. He used the cutting-edge technology of the day, which was, I think, keyword searches in a medical database, and he actually found a couple of different examples. Since then, we've been developing more and more sophisticated ways of stitching all this knowledge together, which is ultimately really just part of this tradition of tools for thought.

Beatrice: Today I'm joined by Sam Arbesman. You're a scientist, author, and a coder, and you're the author of this new book called The Magic of Code. And it's really something that we're going to be diving into in this episode. But for now, could you maybe introduce yourself, tell us what you're working on, and then we'll dive into all the coding stuff after that?

Sam Arbesman: Sure. Hey, so great to be chatting with you. I'm Sam. My current role is Scientist in Residence at Lux Capital. So Lux is a venture capital firm. We have over $5 billion under management and we are in the realm of emerging tech, frontier tech, deep tech. People use lots of different terms. I think probably the most accurate way to describe it is we're playing at that boundary between science fiction and science fact. We're just very interested in trying to take really crazy, out-there ideas—whether it's in AI, robotics, biology, outer space—and bring them into the real world. My role is not on the investment team. My job is to really survey the landscape of science and technology and find areas or individuals or communities or topics that could be relevant to Lux and kind of bring them into the orbit of Lux. Sometimes that means connecting them to the companies that we've already invested in, but more often than not it means being pretty far upstream from investment and laying the groundwork for when those ideas and topics might be relevant. And the truth is sometimes those areas are never ripe for investment.

So I actually spend a lot of my time thinking about non-traditional research organizations, because it happens that some of the most interesting things and most interesting people are working in those spaces. And so I kind of just bring them into the orbit of Lux as well. The way that cashes out is I spend a lot of my time engaging with the public, writing, speaking, and talking to interesting people. Part of that writing is this new book, The Magic of Code, which is actually my third book. I have two previous books: The Half-Life of Facts, which is about how what we know changes over time, and Overcomplicated, which is about the increasing incomprehensibility of our technological systems and how to think about that. And then yeah, I do a whole bunch of other writing on many different topics. But yeah, I'm excited to chat with you.

Beatrice: That's such a dream job, it seems, at least from the outside.

Sam Arbesman: It is a lot of fun. I feel very fortunate.

Beatrice: Well, I'm sure you worked for it. So I'd like to start with your current book that you have out now because, like I mentioned, it's called The Magic of Code. And could you just, before we dive in too deep on this—I'm not a coder myself—so I'd love to just get an explanation from you. What do you talk about when you talk about coding? Like how would you explain what that is, even if we start very basic?

Sam Arbesman: Yeah. So to be honest, what coding consists of is a moving target. It's changed throughout the history of computing. Very early on it was like flipping switches or plugging things in. Then it moved towards writing things in binary or machine code, and now we have more human-readable code. It's going to change going forward with AI. But ultimately, I would say the most common way to think about it is text that is a logical, rigorous way of telling a computer what to do. The more highfalutin way to think about it is it's taking ideas and maybe algorithms and rules or concepts in your mind and somehow instantiating them into instructions for a machine.

The most common ways are using the popular languages that we have like Python or C++ or Java or JavaScript. And these are all specific languages that have features of natural human language but also features of things that border on mathematics. They're this weird in-between thing where there's a certain type of syntax, a certain set of rules, and you write code as text and then you give it to the machine to run. And so it's this weird thing where it is text, but it has this impact on a machine. You are actually controlling it, which is one of the reasons I write about the magic of code. It ultimately is the embodiment of things that we've been desiring for millennia: using text and words to control the world around us. To a certain degree, that's wizardry and sorcery, and we can now do that kind of thing using code. So I would say that's a high-level description. I'm happy to dive in more into lots of different details there.
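
To make that concrete, here is a minimal illustrative sketch (mine, not from the episode) of an idea, "find the largest number in a list," instantiated as Python text that a literal-minded machine can execute:

# A hand-wavy idea ("find the biggest number") made precise enough
# for a machine to carry out, step by step.
def largest(numbers):
    biggest = numbers[0]        # assume the first item wins
    for n in numbers[1:]:       # examine every remaining item
        if n > biggest:         # a strictly larger value takes over
            biggest = n
    return biggest

print(largest([3, 41, 7, 12]))  # prints 41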

Beatrice: Well, why don't we also dive into why you wrote the book? What brought you to write this book? Because I'm assuming you thought a lot about coding, obviously, and so I think that would also help us structure maybe all the different facets of it, because you go into different angles of it throughout the book.

Sam Arbesman: Yeah. So I would say there were a number of different reasons that caused me to think about this. Certainly, when I look at the current conversation in our society around how we think about technology and computing, it feels somewhat broken. Among people outside the world of code and software, many are adversarial towards it. They're worried about it. Sometimes there's a certain amount of ignorance, and the truth is a lot of these concerns are valid.

But when I think about my own childhood experiences with computing, while there were certainly elements of concern and control, it wasn't really like that. When I learned about early computing and computer science, I didn't even view it as just a branch of engineering. I viewed it almost as this humanistic, liberal art that, in the process of thinking about computers, would connect to language and philosophy and biology and art and how we think and all these different areas. My attempt with the book was to rekindle some of that wonder that I remember and try to give people—both who are experts as well as people who are coming from the outside—a sense of just how grand and all-encompassing thinking about computing can be and try to rekindle and bring back a little of that delight.

When I think about my own childhood experiences with computing, it wasn't just worries. It was also fractals and weird algorithmic art and SimCity and early computers like the Commodore VIC-20. It was a whole bunch of weirder things, and the truth is a lot of those kinds of ideas are still around now, but sometimes we've just forgotten them. And so the book steps through various aspects of code and the way in which it impinges upon aspects of our lives, but from the perspective of thinking of it as almost this humanistic, liberal art that touches upon all these different domains.

Beatrice: I think one of the things that I picked up on that seemed really interesting to me was that you said that coding is also a tool for thought. Can you maybe unpack a little bit more what that means and how you think that it changes the way we think or reason about things?

Sam Arbesman: Yeah. So the idea of a "tool for thought" is really just a technology that changes or enhances our ability to think about various topics. People talk about it more right now, often in terms of computing, but ultimately it's a very old idea if you think about it broadly. The alphabet, literacy, paper—these are all tools for thought. The fact that you are often told by your teachers to "show your work" when you're doing a math problem... not only is that a way of being able to get partial credit, but it also actually enhances your ability to think through things because you don't have to keep all this information in your head.

It turns out when it comes to computing, there are two different, interconnected ways of thinking about tools for thought. There's the idea that code itself can allow us to think through problems. For example, having a specific programming language or trying to instantiate a problem in code, that in itself will actually give you a better understanding of a thing. If you have a hand-wavy understanding of a problem or a hand-wavy approach to solving it, that will not work for a computer. Computers are very literal. They require a lot of detail. And so in the act of thinking through how to solve a problem and explain it so it can be done by a computer, you will actually have thought about the problem better.

Then there's also the idea of using computers themselves as tools for thought. This actually has a fairly long history within computing, where very early on people were thinking about computers not just as vehicles for massive-scale simulations, but also as tools for education. People have been talking about this idea of whether we can use computers and software as a way of elevating the kinds of thoughts we can have, whether it's connecting different ideas in scientific domains or helping with brainstorming. You can see there's a long history of this. It can be as simple as a spreadsheet. The idea of being able to model some simple scenario in a spreadsheet can aid your thinking. People involved in the early days of Xerox PARC and the early personal computer were also thinking about these kinds of tools for thought.

Of course, now with the advent of generative AI and LLMs, we have the ability to do this in overdrive. The way I think about this is ultimately, these large language models are taking huge amounts of text and information and embedding them in this virtual latent space—this high-dimensional space with lots of concepts and ideas. Once you have that, you can then navigate that and connect different ideas. So certainly, one of the areas that I think is very ripe for this is the whole realm of scientific discovery. And again, this is not a new thing. It's not just, "Oh, now that we have AI, we can do this." It turns out people have been talking about this for decades. Back in the mid-1980s, there was an information scientist named Don Swanson, and he had this thought experiment. He said, "Imagine somewhere in the vast scientific literature there's a paper that says A implies B, and then somewhere else... there's another paper that says B implies C. So if you had actually read both of them, you would know that maybe it's true that A implies C." But of course, no one has read both of those papers because the literature is just so big. He called this "undiscovered public knowledge"—this idea that there's knowledge out there and it's public, but it's undiscovered because no one has connected these things. He wasn't content leaving this as a thought experiment, so he used the cutting-edge technology of the day, which was keyword searches in a medical database, and he found a couple of different examples. I think one of them was finding a potential relationship between consuming fish oil and helping treat some sort of circulatory condition. And he actually published it in a medical journal, even though he was not a physician. Of course, since then, this has become a very rudimentary thing. We've been developing more and more sophisticated ways of stitching all this knowledge together. And I think now, there are a lot of people trying to use these systems to say, "Okay, how can we actually navigate this to come up with better hypotheses and ideas in scientific discovery?" Which is all ultimately just part of this tradition of tools for thought.
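
As a minimal sketch (an illustration of the idea, not Swanson's actual method), that A-implies-B, B-implies-C pattern can be expressed in a few lines of Python over a toy set of claims; the specific link terms below are hypothetical stand-ins for the fish-oil example above:

# Swanson-style "undiscovered public knowledge": each pair below is a
# link asserted directly by some paper; we look for two-step chains
# (A -> B and B -> C) where no paper directly asserts A -> C.
direct_links = {
    ("fish oil", "reduced blood viscosity"),
    ("reduced blood viscosity", "improved circulation"),
    ("magnesium", "vascular tone"),
}

candidates = {
    (a, c)
    for (a, b) in direct_links
    for (b2, c) in direct_links
    if b == b2 and a != c and (a, c) not in direct_links
}
print(candidates)  # {('fish oil', 'improved circulation')}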

Beatrice: Just on that tangent, do you have any examples of which orgs or people are doing the most interesting stuff in AI for science right now?

Sam Arbesman: I mean, there are a number of companies and nonprofits that are all playing in different areas. Some are helping you look through the scientific literature. There's at least one company that I've seen that is maybe helping with hypothesis generation and even experimental testing. I'm not sure how many of these are public, so I'm not going to give any details or names, but there are definitely a lot of really interesting things happening here. On the one hand, science has become so vast and, in some ways, a little bit more difficult. People have talked about whether the pace of discovery is slowing down, or maybe it's still continuing apace but we just need to put in more scientists and more funding. One way of potentially allowing for a speed-up is using this human-machine partnership and using AI to help with generating hypotheses and ideas. So there's a lot that is happening here. It's super exciting. Still all very early, but I think that's going to be one really interesting avenue in terms of how people think about science going forward.

Beatrice: If we think about these tools for thought more intentionally than we have historically, AI for science is one angle, but are there other angles? If we consciously try to make the most of coding as a tool for thought, what would you want to see?

Sam Arbesman: Another one is just in terms of how we think about education and retaining information. Oftentimes you might read a book and enjoy it, and then a couple of weeks later, someone asks you about it and all you can remember is the top-line takeaway. If you really want to understand some of the information in great detail, you might need new ways of absorbing that information. So for example, Andy Matuschak has been doing a lot of work on how to think about reading and books as a new type of medium, often in conjunction with digital versions where you're being tested on the information. Then you use spaced repetition to eventually be reminded of these things and remember them. That is a really different way of reading than most people are used to, but it has the potential for being able to absorb a lot more information.
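
The scheduling idea behind spaced repetition is simple enough to sketch. Here is a deliberately simplified version (a Leitner-style toy of my own, not Matuschak's actual system; real schedulers such as SM-2 are richer):

from datetime import date, timedelta

# Each successful review doubles the gap before the next one;
# forgetting resets the card to daily review.
def next_review(interval_days, remembered):
    interval = interval_days * 2 if remembered else 1
    return interval, date.today() + timedelta(days=interval)

interval = 1
for remembered in [True, True, True, False, True]:
    interval, due = next_review(interval, remembered)
    print(f"next review in {interval:2d} day(s), on {due}")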

Going back to coding itself, another thing I think a lot about is that writing software is the encapsulation of the coder's thoughts and ideas into software itself. For a long time, that has been the domain of experts. So oftentimes, if you are a subject matter expert but not familiar with coding, you have to outsource that and work with other people, and you might need a lot of funding to do that. Alongside teaching software development, there has been another tradition, which is this idea of democratizing the act of software creation to allow anyone to write software themselves. I view that as another kind of tool for thought. If any single person can build the software they want, rather than having to think about a potential market of millions of people... historically, you haven't been able to do that. But this democratizing software movement, which has been around for a long time, really does allow everyone to use programming as a tool for thought. People have talked about this as "end-user programming"—the idea that the end user can modify the software themselves for their own needs. Rather than it being one-size-fits-all, I can manipulate it and make it whatever I want it to be.

Certainly, with generative AI and the advent of "vibe coding," it is changing and lowering the barrier to allow anyone who has ideas to build things. I'm reminded of a wonderful essay by the novelist Robin Sloan where the title is something to the effect of "An App Can Be a Home-Cooked Meal." The idea is, by and large, when we cook, we can just cook for ourselves and for our loved ones. We should be able to do the same kind of thing when it comes to our software. And this massive democratization—I mean, in some ways we're there, but I think there's still a lot left to do—that really opens up our minds to being able to look at the world and say, "Oh, if I have some interesting, weird idea for a game or some enhancement to the software that I use on a daily basis, that now actually is possible."

Beatrice: I love the metaphor of a home-cooked meal because then you can do it exactly how you like it, if you want coriander or not. I actually wanted to ask you about vibe coding, so I'm glad you brought it up. It's a term I've been hearing people use a lot more, and as I understand it, it's when people use LLMs to write code so you don't have to be as skilled. Do you have any takes on that shift in general?

Sam Arbesman: Yes. I think vibe coding, as it was originally discussed, was almost using a conversation with an LLM to generate software and then never actually looking at the code, just kind of taking things as given. One very interesting take by Steve Krouse, who's the head of a company called Val Town, is that vibe code is basically legacy code. Legacy code is software that was written a long time ago by people who are long retired or maybe even dead, so no one really knows how it works. It's essential for everything to run, but there's no real understanding of what's going on under the hood. That analogy is powerful. If we don't have the ability to interrogate the code itself and just take it as given, then we are abdicating a certain amount of responsibility for understanding it.

If it's for a home-cooked meal, maybe that's fine. But if you're using vibe coding for things that are going to be used by a very large number of people, that doesn't feel as safe or responsible. So for me, I wouldn't use vibe coding that way. But I think "AI-enhanced coding" is probably the right level because that encompasses a much broader set of activities and that is really powerful. It means I don't have to remember certain syntax details or specific libraries. I can go a lot smoother and faster in programming.

That being said, the ability to understand a certain amount of the code being generated allows you to manipulate the resulting code in a much more sophisticated way. So I'm of multiple different minds. On the one hand, understanding traditional code will still be vital because we want to make sure that AI-generated code is not legacy code. On the other hand, I do think this ability to generate code will lower the barrier to learning how to program, as well as just allowing people to try so many more different things.

The computer scientist Seymour Papert had this idea of "low floors and high ceilings." You want things to be very easy to start—a low floor—but you want there to be a great amount of expressiveness and open-endedness. That's the dream of low-code or no-code or vibe coding or whatever these things are. You want it to be easy for non-programmers, but as you gain more facility, you should be able to say, "The sky is the limit. I can do anything I want." I certainly think that AI-enhanced code will open up a lot of doors and allow people to build things in ways they never thought they could because they just thought, "Oh, I don't think of myself as a coder."

On the other hand, we have to still be very responsible with it. Right now, it's still very early days. It's great that we can generate code, but when we use AI systems, by and large we're using a chatbot interface, which is probably not the right interface for making sophisticated software, especially if you want it to be graphical and visual. Maybe we need very different sorts of interfaces. But I am cautiously optimistic for really opening a lot of interesting avenues, where it's no longer the job of programmers to say, "I know all the different features that people are going to need." Each person can decide for themselves. I think it's really a democratization of creativity at that point, and that's really exciting.

Beatrice: Yeah, that always seems exciting, just making it accessible to people in that way. It definitely makes me want to try it. I'm not sure how to phrase this question, but I feel like you touched on it a little bit with vibe coding, that with a lot of coding, it seems like we don't really understand it. Do you agree that's the case? And does that matter?

Sam Arbesman: It kind of depends on what level you're thinking about. One of the most fundamental ideas of computer science is abstraction. When you write a piece of software, you're not starting from scratch. You're not working with the raw binary. You can use all the different libraries and features that have come before you. That allows you to... if you want to do some sophisticated mathematical manipulation, you don't have to figure out the algorithm yourself. You can just take a programming package off the shelf. In that sense, yeah, I don't necessarily know all the details of how these packages are doing their thing, but I operate on the assumption they've been tested and are well-understood. I can just use the interface without having to dig under the hood.

That being said, obviously there are times when abstracting away the details does not work, and you have glitches and failures. That's the hallmark of not understanding a system, where there is this gap between how we thought the system operates and how it actually does. Into that gap fall all the failures and glitches. And certainly, as we build more sophisticated systems, these kinds of things become more likely because the systems we're building have a huge number of interacting parts. They might have accreted over very long periods of time, so we're contending with legacy systems where people do not fully understand these things. As a result, we do end up with software that we just don't fully understand, in the sense that it is too large and complex for any single human to possibly hold in their head at any one point.

On the one hand, it can be worrying. There are ways of making them more understandable, like modularity. For me, it's just a matter of being aware that we are already living in this world that is increasingly incomprehensible. I think most people assume when they're looking at technological systems, "Oh, these things are rational. They're made by people. Therefore, we should understand these things." And that's not really the case. So for me, I just want there to be a greater awareness of the vast complexity of these technologies.

I remember years ago when the Apple Watch first came out, I read some article about whether mechanical watches would still be popular. They interviewed this one guy and he said, "Of course I would want to use a mechanical watch. I think about the vast complexity of a mechanical watch as opposed to a smartwatch, which is just a chip." And I'm thinking, "Just a chip?" A chip is orders of magnitude more complicated, but we've been shielded from it. For me, I just want people to be aware of that complexity because understanding is not an all-or-nothing proposition. It's not, "I either fully understand this system or I'm in total ignorance and things are just going to go wrong." There's a really complicated spectrum, and we can slowly but surely move towards greater understanding. We're already well-positioned for doing that. When we think about biology, we don't fully understand human beings, but we have ways of trying to understand ourselves better, whether it's cognitive science or neuroscience or psychology. These are techniques to slowly understand vast, complicated systems. I think we need to use those same mental approaches to thinking about technologies of our own making when these things are so complex that they verge on being organic.

Beatrice: I think my favorite part of the book was the last part where you talk about reality, and you write a bit about biology and how biology and software increasingly overlap. Where do you see this going? Do you see a future for biocomputing or stuff like that?

Sam Arbesman: The way I think about this is, at a surface level, we can say simple things, like in the same way that computers use zeros and ones, biology uses the four bases of A, T, C, and G for DNA. But in the book, I discuss two ways of thinking about biology. There's the "information view," where you have streams of information being run and operated on. And then you have "the mess," which is that we are just wet, squelching systems. Down at the cellular level, there are massive amounts of randomness and stochasticity. I view biology as a combination of the two: it has computational features, but it is also in some ways radically different.

A number of scientists have talked about how biology has computational features. We're getting better at encapsulating biology in software through simulation, but we're also realizing that biology is an aspect of some larger thing, which is information processing. When people say, "Is a cell a computer?"—yeah, it's a computer, but not in the way that my laptop is. It is computing, it is processing information, but in a vastly different way. It's more parallel, it's much more stochastic.

The more we understand biology, it will give us a better sense of the true suite of possibilities for how we might compute. People have done things where they've used DNA as a means for factoring huge numbers or used slime molds to do optimization problems. I think there are some very interesting things around blending biology and traditional computing to allow us to think much more broadly. Also, in the way we write software, it's traditionally, "Oh, you write a little module, and it connects to other modules." Biology doesn't do that. A single enzyme or string of DNA might have a whole bunch of different functions that are all overlapping and happening at different hierarchies. There are scientists who call this poly-computing. It shows you there's just a totally different way of computing. I love thinking of biology as both deeply computational, and also as a window into so many other ways of computing.

Beatrice: Another thing that this leads to, that you write about in the book, is this line between artificial life and emergent behavior in software. What do you think about that boundary between machine and life? Do you think we'll reach a point where software entities could exhibit something like agency or intention or consciousness?

Sam Arbesman: The idea of "artificial life" is a domain of research that I think first became a thing in the late '80s and had its heyday in the 1990s. It went through a quieter period but is now having a resurgence, which is exciting. It's basically the idea of thinking about "life as it could be." Rather than saying the one form of life we can study is biological life on Earth, let's try to understand life by its most fundamental features and see if we can encapsulate those in computer programs. In the process, we can learn about how life works and maybe figure out the true breadth of what life could be.

One very basic feature of life is that it evolves. So there's an entire space of research around evolutionary computation and genetic algorithms, using the idea that evolution is an algorithm for allowing change. We can encapsulate those ideas within software and evolve computer programs to understand how evolution works.
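
A minimal sketch of that idea (a standard textbook toy, not an example from the book): evolving random bitstrings toward an all-ones target through selection and mutation.

import random

# "OneMax" genetic algorithm: fitness is the number of 1s in a genome.
# Each generation keeps the fitter half of the population and refills
# it with mutated copies of the survivors.
LENGTH, POP, MUTATION_RATE = 20, 30, 0.05

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:
        print(f"perfect genome found in generation {generation}")
        break
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]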

But more broadly, will we eventually encapsulate life itself in software? Will software eventually be considered living? I'm going to be fairly agnostic on that one. I think in principle, that is certainly possible. I don't think there's any magic stuff overlaying life; it's a physical process. The more we understand it, the more we realize it has computational features. Are we there yet? Probably not. But certainly, between advances in AI and evolutionary computation, there are some interesting potential avenues. I'm very excited for what the future holds, less about actually embodying life in software and much more about using software as a means for better understanding what life is. If we build a vast computational sea of virtual bacteria that exhibits every aspect of life, it will certainly teach us about how life works. I'm really excited about that.

Beatrice: You mentioned simulation also here. What can we gain from simulations?

Sam Arbesman: That's one of these things we've been doing since the advent of modern digital computers. You look back to the earliest computers in the late 1940s and they're doing large nuclear simulations. We've always wanted to model and encapsulate the world in computer programs because it's an interesting combination of using mathematical descriptions to explain the world but also having those descriptions be dynamic. They're not just inert equations; they can come alive.

Through the confluence of massively more powerful hardware, better data, and better algorithms, we are now able to simulate more and more of the world with higher and higher fidelity. One example I discuss in the book is weather prediction. A lot of people unfairly malign weather prediction, but the truth is, our ability to simulate future weather has improved quite steadily over the past decade or two.

That being said, the world is really complex in ways that we don't even understand yet. For me, when I think about simulation, it's not just for prediction. One of the most important reasons aside from prediction is for understanding. When you build a very complex model of the world, it might have predictive power, but you often don't understand how that model works. SimCity is a computer game. Does it accord with the way cities work? No. It's a vast simplification. But playing SimCity as a kid was probably the first time I encountered a complex system that I could manipulate, that would have unanticipated consequences, that would bite back in unexpected ways. It gave me an intuitive understanding of complex systems, as well as humility in how they operate and how much control, or lack of control, I actually have.

When we think about simulations, we always have to have a certain amount of humility. We're nowhere near making a simulation so sophisticated it's as complex as the real world. I talk about Laplace and Lovecraft as these two extremes. Laplace, the French scientist, had these thought experiments around the clockwork universe and explaining everything. On the other hand, there's H.P. Lovecraft, the horror writer, who's like, "We barely understand this thin veneer of the world and there are all these horrifying monsters out there." Madness aside, I think we have to have a certain amount of humility. There's a wonderful book called Fluke by Brian Klaas, and in it, he talks about the sheer amount of randomness and unexpectedness even in systems that we think we fully understand. So when we think about simulation, it's very powerful and sophisticated, but we have to constantly approach it with a great deal of humility.
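
One toy example of why that humility is warranted (my illustration, not the book's): even a one-line "simulation" of population growth, the logistic map, flips from orderly to effectively unpredictable with a small change in a single parameter.

# Logistic map: x -> r * x * (1 - x). At r = 2.9 the trajectory settles
# to a fixed point; at r = 3.9 it is chaotic, and a 0.001 change in the
# starting condition leads to a completely different future.
def run(r, x, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(run(2.9, 0.200))  # converges near 0.655 regardless of start
print(run(3.9, 0.200))  # chaotic regime
print(run(3.9, 0.201))  # tiny perturbation, very different answer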

Beatrice: I was thinking that maybe we leave The Magic of Code a bit, unless there's anything you think we've missed that would be important. But I wanted to also touch on your two other books, because if you've spent so much time writing a book, it's probably an idea that you've thought a lot about.

Sam Arbesman: Sure. I've noticed one of the themes in a lot of my writing has just been this theme of approaching the world with a certain amount of humility. My first two books are very much about that, and I think even the third one is: computers are really powerful, but we have to think about their powers in context with our humanity and be humble and deliberate in how we use these systems.

Beatrice: So your first book is called The Half-Life of Facts. Could you maybe explain what you mean by that title? We could even relate it to learning something like coding, because what I take from the book is that things are always changing.

Sam Arbesman: Yeah. The idea behind The Half-Life of Facts is, on the one hand, we know that our scientific knowledge and knowledge about the world is constantly changing. Many of the things we read in textbooks when we were young are no longer true. But it turns out, underneath all that change, there are regularities to how what we know grows, how errors get rooted out, how things get overturned.

The idea is that in the same way we think about radioactive materials—I can't predict when a single atom of uranium will decay, but if I take a whole chunk of it, it becomes very regular and understandable, and you can chart out its half-life—the same kind of analogy can be used when we think about knowledge. It turns out some people have tried to study how long it takes for half of the facts in certain fields to be overturned or become obsolete.
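
To make the analogy concrete (the numbers here are illustrative, not figures from the book): if a field's facts decayed with a 45-year half-life, the surviving fraction after t years would be (1/2)^(t/45).

# Exponential decay of knowledge, by analogy with radioactive half-life.
def fraction_still_valid(t_years, half_life=45.0):  # 45 is illustrative
    return 0.5 ** (t_years / half_life)

for t in (10, 45, 90):
    print(f"after {t:3d} years: {fraction_still_valid(t):.0%} of facts still standing")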

The book was a way to allow people to grapple with how knowledge is changing. Rather than saying, "Oh, I read this fact when I was young and now people are saying it's not true, so therefore I'm just going to throw my hands up," the truth is we can understand the world. Science is not a body of knowledge. Science is a rigorous means of querying the world around us. What we know is going to be constantly in a draft form. I was talking to a professor of mine who told me a story where he gave a lecture on a Tuesday, and then the next day read a paper that invalidated everything he had taught. He went in the next class on Thursday and said, "Remember what I taught you? It's wrong. And if that bothers you, you need to get out of science."

Science is constantly in draft form. Things at the frontier are more likely to be overturned because that's where we know the least. The book is trying to promote a means of looking at the world, a way of how to learn, how to constantly be learning and re-evaluating what we know, and maintaining a healthy skepticism.

Beatrice: I see the connection between your three books, because your second book is called Overcomplicated, and as I understand it, that is also about how we should interact with technology. Could you explain the core thesis of that book?

Sam Arbesman: That one I've actually discussed a little bit already, which is this idea that not only are technologies becoming more complex, but they are increasingly becoming incomprehensible, even to the people who have built them. The book looks at what forces lead to this point of incomprehensibility, such as legacy code or grappling with the complexity of the world, and then what do we do? How do we meet these technologies halfway if we cannot fully understand them?

When people are confronted with a system they don't fully understand, they immediately zoom off towards two extremes: fear in the face of the unknown, or undue reverence and awe. "Oh my god, this AI technology is amazing!" They almost have this worshipful attitude. The truth is you shouldn't have either one. You shouldn't be hopelessly worried or fearful, but at the same time, you shouldn't be worshiping these systems. They're made by imperfect beings. They're made by us. Not only are these two extremes not great, but they also cut off questioning. If you're so worried or so in awe, you're not curious about them. Humility says, "Okay, I might not be able to understand these systems fully, but trying to increase my understanding slowly and steadily is still something that can be achieved and is worthwhile."

Beatrice: Since you wrote these books a few years ago, have you updated or changed your mind on anything?

Sam Arbesman: Overcomplicated came out in 2016, well before the heyday of these really complex AI systems. On what I talked about in terms of technological complexity, I feel vindicated. I would have loved to show more examples and give people a sense of the vast complexity of the systems we're now dealing with.

With The Half-Life of Facts, one thing I wrote was that I was maybe a little bit pessimistic about certain aspects of science. I wrote about how, for the most part, scientists only receive credit for doing new things rather than trying to replicate other people's experiments. Since I wrote the book, there's actually been a much bigger focus on reproducibility in the sciences. I'm really gratified that we're now focusing on that, and I'm glad that I was wrong, that people are now actually being incentivized to reproduce things that they thought were well-understood and it turns out maybe they don't actually hold up.

Beatrice: So I want to sort of bring it all together now. I have to read the complete title of your book because it's quite a long subtitle. It's called The Magic of Code: How Digital Language Created and Connects Our World and Shapes Our Future. The point I wanted to dive into is the "shapes our future" part, because this is the Existential Hope podcast, so we want to think about what futures we want to steer towards. When you think about how code could help shape our future, what do you think are the most optimistic or positive cases you can think of?

Sam Arbesman: Well, I love your framing: rather than asking, "Where do I predict things are going?", it's "What is the vision that I want, and how can we work backwards towards it?" Because that's really the key. We are building these systems, so we can be as responsible as possible to make the world that we want.

When it comes to code, certain things around AI-assisted scientific discovery hold a great deal of potential. But another aspect is just around the human-centered aspects of technology. Right now, because we have such large technology companies, most people don't really feel like software and computing is really being made for them. It's just, "Here's a thing that's imposed from above, now go use that." I think the ability to have more malleable software and these trends around democratization of software creation open the possibilities for creating a much more humane world, where people can build the things they want for themselves rather than contending with things being given to them that are often created for incentives that do not align with human flourishing.

I just want more software that's in line with our humanity. There's a TV show from a number of years back called Halt and Catch Fire, about the early years of computing. In the first episode, one of the characters says, "The computer is not the thing. It's the thing that gets you to the thing." If we focus on the computer itself as the thing we want to just make more and more powerful, we'll forget that ultimately, computers were in service of making us the best versions of ourselves, helping with education or tools for thought or better understanding biology. As long as we keep that in mind and say, "This is the reason we have computers," then I think we're in a good situation.

I think I end the book by talking about this organization called the People's Computer Company. They had a manifesto or tagline that was something about how "computers are for people." And I think we kind of forget that. So for me, alongside all the utilitarian and enterprise-scale software, let's remember that computers can also be things for people. They can be useful, and they can also be engines for delight and wonder. As long as we keep those things in mind as we build these systems, then things will be at the right human scale and it'll feel a lot healthier.

Beatrice: I agree. I think that's a great vision and also probably a great note to end on. Thank you so much, Sam, for coming on and talking about this book and all your previous books as well, and for writing it. We'll share it when we share the episode. And yeah, it's a recommended read from me.

Sam Arbesman: Well, thank you so much. This was a lot of fun. And yeah, I appreciate you taking the time to chat with me. This was wonderful.

RECOMMENDED READING

People

  • Don Swanson: The information scientist who developed the concept of "undiscovered public knowledge."
  • Andy Matuschak: A researcher known for his work on tools for thought and new mediums for learning, including "evergreen notes."
  • Robin Sloan: Novelist and writer who authored the essay "An app can be a home-cooked meal."
  • Steve Krouse: Founder of Val Town and a voice in the future-of-programming community.
  • Seymour Papert: A pioneer of AI and computer science education, co-inventor of the Logo programming language, and proponent of "low floors, high ceilings."
  • Brian Klaas: Author of the book Fluke.
  • Pierre-Simon Laplace: The French scientist associated with the idea of a clockwork, deterministic universe.
  • H. P. Lovecraft: The horror writer used as a metaphor for the incomprehensible and vast unknown.

Books

  • The Magic of Code by Sam Arbesman: His new book, discussed throughout the episode.
  • The Half-Life of Facts by Sam Arbesman: On how what we know changes over time.
  • Overcomplicated by Sam Arbesman: On the increasing incomprehensibility of our technological systems.
  • Fluke by Brian Klaas: On randomness and unexpectedness in systems we think we understand.

Organizations & Companies

  • Lux Capital: The deep tech venture capital firm where Sam is the Scientist in Residence.
  • Xerox PARC: The legendary research center that pioneered many elements of personal computing.
  • Val Town: The company founded by Steve Krouse, focused on a new kind of programming environment.
  • People's Computer Company: An early computer hobbyist organization with the ethos "Computers are for people."

Concepts, Tools, & Media

  • SimCity: The classic city-building simulation game.
  • Halt and Catch Fire: The AMC television series about the personal computer revolution in the 1980s and 90s.