Nathan Labenz | What are the Best-Case Scenarios for AI?

About the episode

What does a genuinely positive future with AI look like? While dystopian visions are common, the most valuable—and scarcest—resource we have is a concrete, hopeful vision for where we're headed.

In this episode, we're joined by Nathan Labenz, host of the popular Cognitive Revolution podcast, to explore the tangible possibilities of a beneficial AI-driven world. Nathan shares his insights on everything from the near-term transformations in education and healthcare—like AI-driven antibiotic discovery and personalized learning—to the grand, long-term visions of curing all diseases and becoming a multi-planetary species.

We dive deep into crucial concepts like Eric Drexler's "comprehensive AI services" as a model for safety through narrowness, the transformative power of self-driving cars, and how we can collectively raise our ambitions to build the future we actually want.

Transcript

Beatrice (00:00)

Okay. So I am very happy today to be joined by Nathan Labenz. You run the Cognitive Revolution podcast, and we were just talking briefly before starting this recording about how many episodes you've done. It's so many, so it's a bit intimidating to interview such a prolific podcast host. Feel free to direct me if you have any prompts.

Nathan (00:23)

Not at all. Well, thank you. I'm excited to be here and looking forward to this conversation. For my part, I'm basically just obsessed with AI and trying to understand it as well as I can. Of course, it's such a horizontal technology that it's touching all aspects of life and society, so there's a never-ending number of angles to come at it from. Eight episodes a month honestly isn't really enough to get after all the angles that I would like, but it's probably as many as anybody could reasonably produce. I certainly don't expect anybody to listen to all of them, but it's been a really fun learning journey for me. I don't consider myself to be very charismatic, in all honesty, so I mostly just pinch myself that anybody wants to listen to it at all. But I'm looking forward to this conversation here today with you.

Beatrice (01:07)

Yeah, well, today's angle is going to be the theme of this podcast: Existential Hope. I know you've said that the scarcest resource is a positive vision for the future, and I think that's what we're going to try to dig into today, especially in relation to AI, which I think is the most interesting question. So let's start. If you woke up 10 or 20 years from now, and we had a really good, positive future with AI, what would you see around you?

Nathan (01:37)

It's a hard question. I say that all the time—the scarcest resource is a positive vision for the future. I really do mean that, and I don't think that I am particularly advantaged in terms of having a crystal ball. Another one of my common jokes is that my crystal ball gets real foggy more than a few months out, so I'm in uncomfortable territory trying to see farther into the future than that. But here we go.

I think we are going to see just about everything change. The reason I called my podcast the Cognitive Revolution is by obvious analogy to the Industrial Revolution and the Agricultural Revolution. If you go back into the "before times" for these previous revolutions, what people were doing before versus after is just totally different. At some point, we were small bands of hunter-gatherers, always on the move, always searching for food, living literally hand-to-mouth.

Then we figured out how to grow food, which created a whole different reality and a much bigger scale at which people could live together. Those economies of scale created the beginning of the technological exponential that looked really flat for a long time but was seemingly already on an exponential curve even then. The same thing happened again with the Industrial Revolution. Mechanizing farming took us from a scenario where 80-90% of people were growing food to today, where I think only about 2% of people in the developed world are needed to grow food because machines can do so much of that work.

I think the same thing happens for cognitive labor. If AI were to stop progressing today, I think we already have powerful enough AI to automate the majority of cognitive work. We don't have everything wired up in the right way or all the data structured for AI to consume it, so there's a lot of implementation work that would need to be done to realize that dream of automation. That would take five to ten years, and probably longer, because I always underestimate implementation timelines. But I think we have sufficiently powerful AI that we could automate a majority of cognitive work already.

And then the question is: well, what are we going to do? If people have moved from roving around to farming, then from farming to factory jobs, and then from factory jobs to white-collar cognitive jobs, what do people do if AIs can handle the majority of that? I don't know. One candidate answer is that the caring economy, broadly, could be the next big thing. I think that's a little challenging because studies and survey results often show people prefer talking to AIs for a lot of things. AI doctors, for example, tend to get higher ratings on bedside manner than human doctors because they have some unfair advantages. They can be infinitely patient; they aren't time-bound in the way human doctors are. So they'll answer all your questions and have no constraints on how much time they can spend with you.

Teaching is another area where, for as long as the world is at all recognizable, we will want humans to be role models for the next generation. But we already see schools where AIs are starting to be responsible for all the content, and humans are moving into a more guide, coach, and motivator role. The AI gives you the lessons and grades your homework, and the humans are just there for these softer skills. Is there enough demand for that to absorb all the people who will probably get displaced from their current jobs? I don't know, that seems like a stretch, and it's a rocky transition, but that's at least one answer.

Another answer is that we might actually have that life of leisure people have dreamed about. Famously, Keynes said 100 years ago that by today, we should be working 15-hour weeks. We're obviously not. But maybe that's another way things could go. Zuckerberg has had really interesting thoughts—and like many things in AI, I have very mixed feelings on what he's up to—but in his post introducing the personal superintelligence concept, one of the things he pointed out was the macro trend of people spending less time working and more time socializing, creating, and consuming media.

Maybe the big shift from work to leisure accelerates. Maybe people start to do a lot of stuff in VR and AR. Maybe Neuralink becomes a broad technology, and these experiences, especially if they're literally connected to your brain, could become extremely compelling. So maybe spending a lot of time in VR is one answer for the 10-years-from-now future.

Beyond that, it's really hard to say. Classically, people say, "Well, we didn't know what the cell phone would bring us. Nobody had Uber in mind when we introduced the iPhone." So what apps will be built on the AI technology foundation is very hard to say.

One thing I think could be really interesting is a sort of radical egalitarian mode of access to frontier technology. I often use the example of doctors. They're a scarce resource today. Not everybody can access a good doctor, but with AI, that should change. People should be able to get quality medical advice regardless of their means. That's exciting. I was reminded of the Andy Warhol quote where he goes on about how the great thing about American consumer culture in the 60s was that everybody could get the same stuff. The president drinks Coke, movie stars drink Coke, and you can get your own can of Coke and know that it's the same as theirs. Even if you were richer, you couldn't get a better Coke.

That's been true of the iPhone. I don't think it'll be true of AI in exactly the same way because there will probably be frontier, high-powered systems that not everybody needs. But if you think about that VR world, one of the big differences between the haves and have-nots right now is the experiences they can access. That gap could perhaps collapse, where the exciting life of adventure that is currently only available to a select few could be made scalable through some combination of Neuralink and VR, all delivered in a low-energy, low-resource way that could scale to everyone, the way Coca-Cola did.

I'm probably going to get a lot more wrong there than right, but those are at least some musings about just how different the future could be. And it's coming at us fast. The Industrial Revolution took 200 years. The electrification of the United States took 60 years, from 1880 to 1940. The huge difference there was they had to actually build out the wires to everybody's house. Now we already have the wires that deliver AI to the point of consumption. So you can have these centralized upgrades where, from one version to the next, capabilities can jump much faster. It's going to be a wild ride—exciting and also a little bit scary.

Beatrice (09:35)

It's both a very exciting and probably challenging time to be a human coming up. I feel like there are a lot of threads to pull on. Thank you for being so concrete about these things. Is there anything personally that you would be just so excited for?

Nathan (09:53)

Well, I've dreamed of self-driving cars since I was a kid. I used to sit in the backseat with my mom or dad driving, and so often we'd be sitting at a red light and nobody was going the other way. That bothered me so much, even as a kid. It felt like if we had a little more collective will, that was probably solvable even without AI. You don't need AI to change the light when it's clear that nobody's coming, right? So that maybe will be a theme as we get into this: what is it going to take to be successful? A little more social and collective will to demand better, higher expectations from the public, I think is one thing that could be really critical to realizing the good future.

But now we've got self-driving, and it works. Waymo is amazing. I don't know if you've used one, but it's been a while since my last ride. It's fully autonomous—you summon it with the app, it shows up with nobody in there, you get in, and it drives you where you want to go. What was really striking to me was how quickly I got bored with it. I had been thinking about this for literally decades, but I found myself checking my phone five minutes into the ride and needed to intentionally remind myself, "Hey, you've been looking forward to this for a long time. Put your phone away and savor the first experience of a real self-driving car." But it was so good that it just felt like a background reality within minutes.

The safety data seems to suggest that it is already a lot safer than human drivers. The price point in the San Francisco Bay Area is quite a bit higher than Uber, so it seems people are willing to pay more for the self-driving experience, whether that's for safety or because they don't want to talk to the driver.

But yeah, that's a huge one. I dream of going to visit my grandmother who lives four hours away and being able to either work or ideally sleep on the way there, just doing an overnight trip in the car. You start to imagine the different form factors, too, right? If I truly don't have to pay attention, then you can have a very wide range of car types. You could have sleeper cars that you just get into, go to sleep, and wake up at your destination like a mobile hotel room. That alone would be an incredible improvement and should come with 30,000 fewer road deaths in the United States. I think there's a million road deaths annually across the world. It's going to take a while for that to be built out, but yeah, I've been waiting for that one for a long time.

Beatrice (12:30)

Yeah, I agree. The first time I went in a Waymo, I was also just like, "Wow." And we get used to it quickly. But it's one of those things that just feels like it makes sense. When I went in one for the first time, it just felt like, why aren't we already doing this? I recently did a special episode on autonomous vehicles and what we need to do to get them coming as soon as possible. Imagine on a Friday night, going to bed in your car and waking up Saturday morning out in nature. That would just be amazing.

So if we zoom out a bit, one thing with existential hope that's interesting is thinking about really big visions for the future. I gave you 10 to 20 years in the previous prompt. I'll give you a hundred years if you want, or even longer. Of course, your crystal ball gets foggy, but if you get to dream, what do you think would be a best-case scenario, especially in relation to AI?

Nathan (13:22)

Yeah, that's a tough one for sure. The fog of even just what's going on with AI right now is hard to penetrate. Even among people who are extremely informed, very knowledgeable, even titans of the field, there are just these super fundamental disagreements around what currently exists and what's going to happen in the immediate term. The farther you go out, it just gets radically difficult.

But I'm on board with the people who would hope that we would cure all the diseases. Things that were once totally fantastical now no longer seem so. With AI's ability to grok what is going on at many different levels of biology, the potential for us to actually hit something like a Kurzweilian escape velocity—where every year your life expectancy increases more than a year—no longer seems totally far-fetched.

Just in preparing for this, I was looking at a recent paper about the creation of novel antibiotics. We haven't had many new antibiotics created in a long time, but a single group at MIT just discovered multiple new antibiotics with new mechanisms of action that are effective against antibiotic-resistant microbes. It's an interesting sign of the times: something that big would have been all anybody could talk about if it had happened when I was a kid. And now, most people, even those who are relatively plugged in, just haven't heard of it because there's so much else happening.

So, curing all the diseases still sounds a bit hubristic perhaps, but it does seem to be increasingly plausible. I would certainly like to live longer and healthier, which is obviously critical.

Becoming a multi-planetary species is also a great aspiration for humans. I think Elon has become hard to defend in some ways, but the general idea that we should aspire to get off of planet Earth and get out into space makes a ton of sense. It's really interesting to wonder, and I don't think we're going to have good answers on this for a while, but do we think that non-carbon forms of life, something truly different from us, could carry on our values, our consciousness, our intent? I really don't know. Do AIs have any moral value? Are they moral patients? Does their experience matter? I'm radically uncertain on those questions. It does seem like if you were to ask, "What is the best way for us to project ourselves into space?" getting away from the current form of our bodies would probably be a natural part of that design. But I'm really unsure if we should be confident that we could create something on a totally different substrate and feel like it matters in the same way that we are confident that we matter.

Anyway, there's a long time to figure out some of those details. I think the goal of getting out into space is very worthy. Can AIs help us get there, or do the AIs take the baton and actually go out and do that? There's this idea of the "worthy successor," which on one hand is a dangerous idea that we should not lean into without having these difficult questions answered. But if we did have those answers, and I really felt like I understood where consciousness comes from and believed that these things had it and were having positive experiences, then I could imagine they would be a worthy successor that might be a lot more suited to travel through space over great distances and lengths of time. But yeah, that's all pretty fuzzy stuff, I suppose.

Beatrice (18:23)

Yeah, fuzzy but also concrete ideas, and I think they're all very interesting. I agree it's hard to be confident. But on the consciousness part, I feel like it's one of those things that, even if we obviously cannot be certain of it, it's such a "big if true" that it's worth thinking about a little bit already, just because of that.

And then to scale it back a little bit, zooming back to the here and now. Is there anything you think is maybe a bit underestimated in terms of near-term AI applications? Something that's maybe boring but transformative?

Nathan (19:03)

Yeah, I think the inference-time scaling paradigm. Folks like Dario Amodei and Sam Altman have been talking about this for a while, but it's hard for people to make the leap with them.

In the realm of "boring but transformative," the idea of the spreadsheet comes to mind. People used to sit there with big pieces of paper and a pencil, doing calculations and having to erase and fill things in again. Then you had the spreadsheet, where you make one change and it propagates through all the calculations in an auto-updating way. I think this idea of things auto-updating or running in the background for us could provide a ton of value in a world that is still mostly recognizable.

For one thing, imagine a second opinion for everything. I kind of live this way myself today. If I'm going to send an important message, or if I'm working on a deal and I get a contract, I'll take that contract, run it through three or four AIs, and say something like, "I'm this party, here's the previous communication, here's the contract I just got. What should I be concerned about?" The AIs produce so much output, so fast, that sometimes I'll then take the three or four responses, put them into another window, and say, "Okay, give me a single comprehensive summary of all four of these to de-dupe the points." It's not about having the AI tell me what to do, but about having that second, third, or fourth check on everything I do. That just gives me the ability to move a lot faster with a lot more confidence and accuracy.
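
As a concrete illustration of that fan-out-and-consolidate workflow, here is a minimal Python sketch. Everything in it is hypothetical: `ask_model` is a stand-in for whatever chat-completion client you actually use, and the model names are placeholders.

```python
# Minimal sketch of the "second opinion" workflow: send the same
# question to several models, then have one model merge and de-dupe
# the answers. `ask_model` and the model names are placeholders.

def ask_model(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion call; wire this to your provider's SDK."""
    return f"[review from {model} would appear here]"

MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model names

def second_opinions(my_role: str, contract_text: str) -> str:
    question = (
        f"I'm this party: {my_role}\n"
        f"Here's the contract I just got:\n{contract_text}\n"
        "What should I be concerned about?"
    )
    # Fan out: the same prompt goes to every model independently.
    reviews = [ask_model(m, question) for m in MODELS]

    # Consolidate: one model de-dupes the points into a single summary.
    merge_prompt = (
        "Give me a single comprehensive summary of these reviews, "
        "de-duplicating the points:\n\n" + "\n\n---\n\n".join(reviews)
    )
    return ask_model(MODELS[0], merge_prompt)

if __name__ == "__main__":
    print(second_opinions("the buyer", "...contract text..."))
```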

You could also see that in all sorts of matchmaking, whether it's economic, romantic, or even just getting together with friends. Why don't we hang out with friends every night? One reason is coordinating that stuff takes a lot of time. But the AIs can definitely do that sort of thing if we set them up to run in the background across a growing number of matchmaking problems. I think that will be really interesting—just greasing the wheels of commerce, greasing the wheels of dating markets. All these things are relatively high friction.

This is another example where getting to the good equilibrium is going to be a challenge. Hiring is one we're seeing now. An example I always give to business owners is that they should have an agent constantly searching for candidates they might want to proactively reach out to. Should you be doing more proactive recruiting? They all say yeah, ideally. But who has time for that? The AIs do. There is now the question of how we deal with all that volume on the receiving end. Companies are starting to report that it is getting harder to separate the real resumes from the AI resumes. There are interesting ideas, like maybe you should have to pay a dollar to apply to a job to limit the "spray and pray" approach.

I firmly believe that there's just a lot of value to be unlocked in matches that are not made and deals that are not struck just because people don't have the time. Some things are hard to do but easy to verify. It's hard to find the next customer or the engineer you want to hire. But when it's presented to you, you can often recognize it pretty quickly. If we can get AIs to propose good ideas to us that we can then quickly verify, there's a huge amount of value to be created by automating away that friction.
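
The "hard to do, easy to verify" asymmetry can be sketched as a simple propose-then-verify loop. This is an illustrative pattern only, not anything from the episode's tooling; both callables below are hypothetical stand-ins.

```python
# A minimal sketch of the "hard to do, easy to verify" asymmetry:
# an expensive proposer does the exhaustive search, and a cheap
# verifier filters candidates so a human only reviews plausible ones.

from typing import Callable, Iterable

def propose_then_verify(
    propose: Callable[[], Iterable[str]],   # slow, exhaustive search (the AI's job)
    verify: Callable[[str], bool],          # fast check (easy for a human or a model)
    limit: int = 5,
) -> list[str]:
    """Return up to `limit` candidates that pass the cheap verification step."""
    accepted: list[str] = []
    for candidate in propose():
        if verify(candidate):
            accepted.append(candidate)
            if len(accepted) >= limit:
                break
    return accepted

if __name__ == "__main__":
    # Toy usage: screening resumes an agent surfaced overnight.
    pool = ["junior, no Python", "senior, Python + ML", "senior, Python"]
    hits = propose_then_verify(
        propose=lambda: pool,                              # stand-in for an agent's search
        verify=lambda r: "senior" in r and "Python" in r,  # stand-in for a quick human check
    )
    print(hits)  # -> ['senior, Python + ML', 'senior, Python']
```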

Beatrice (23:57)

Yeah, you could find the best friends or partners ever if an AI could scan everyone in the world for you.

Nathan (24:07)

I'm doing this a little bit in my family. My wife and I have three kids, and there's always the question of what we're going to do on the weekend. Can we get these kids out of the house? Three boys, and they go wild if they don't get out. So it has become a priority for me to find something to do with them every weekend. But the search for that is tough. Perplexity and ChatGPT are pretty good at, "Hey, what's happening in Detroit this weekend?" ChatGPT now even remembers my family and kind of knows what I've been interested in in the past. So it's really good at doing a much more comprehensive search than I would on all the little weekend festivals and this and that. We're actually getting out more because the search cost of finding something interesting to do has dropped significantly.

Beatrice (25:13)

That's really interesting and a really concrete use case. I definitely trust it more with travel planning these days as well. Is there a specific sector that you think is more ripe for AI to be fully integrated and shape its trajectory? Like science, healthcare, education, or governance?

Nathan (25:49)

I mean, I think it's all of the above, really. It's just a question of timing on both the development and adoption sides. The canonical first answer is software engineering. That seems to be driven by the fact that the AI developers themselves are software engineers interested in solving their own problems. It's also driven by the fact that it's comparatively easy to validate the outputs of the models in software work because it's just text. The text gets compiled, it gets run, and you get an error message really quickly.

I just saw in the last 24 hours that Replit, a platform I love for building software, introduced their Agent V3. One of the big differences is that it now not only writes the code to build your application but also spins up a browser and becomes your QA agent. The thing goes through a full loop: it creates the app, actually tries to use it itself, finds the issues—not just "did it compile?" but from a user's standpoint—and then comes back and tries to fix them. The tightness of that loop is a good heuristic for how quickly things will come to different industries.

If you compare that with, say, medicine, you've got a feedback loop that's a lot slower. I alluded to those antibiotics. You can't just run experiments easily. A big part of how they're developing antibiotics is with these in silico experimental setups. You can generate huge numbers of candidate molecules, then run a simulation to see if it binds and would work. They were able to get a pretty high hit rate out of the in silico experiments. But it's still going to have to go through a clinical trial process, and the ultimate feedback of "fewer people are dying from bacterial diseases" is going to take a while. So a lot of things will be rate-limited by those kinds of bottlenecks.
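
As a rough sketch of that in silico loop (generate candidates, score them in simulation, send only the best on to the slow real-world step), here is a hypothetical Python outline. Every function is a labeled stand-in; real pipelines use generative chemistry models and physics-based docking simulators.

```python
# Hypothetical outline of an in silico screening loop: generate many
# candidate molecules, score each in a simulated binding assay, and
# send only the top handful on to (much slower) wet-lab validation.

import random

def generate_candidates(n: int) -> list[str]:
    """Stand-in for a generative model proposing candidate molecules."""
    return [f"molecule-{i}" for i in range(n)]

def simulated_binding_score(molecule: str) -> float:
    """Stand-in for a docking/binding simulation (higher = better)."""
    return random.random()

def screen(n_candidates: int = 100_000, keep_top: int = 50) -> list[str]:
    pool = generate_candidates(n_candidates)
    ranked = sorted(pool, key=simulated_binding_score, reverse=True)
    return ranked[:keep_top]  # only these advance to the lab

if __name__ == "__main__":
    shortlist = screen()
    print(f"{len(shortlist)} candidates advance to wet-lab testing")
```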

However, these bottlenecks aren't without workarounds. You'll have these simulation environments. That's also happening a lot in self-driving. They don't just train on actual data; they augment that data and create all sorts of scenarios they may have never encountered in the wild.

The bottlenecks can be social too, like in education. Right now, there's never been a better time to be a motivated learner. If you turn on the ChatGPT teach and learn mode, go into voice mode, and give it access to your screen, that's by far the best way for me to learn about biology. The biology part has so much background knowledge and so many terms. When the AI can look over your shoulder and you can just casually say, "Hey, what's this?" or "Why does this even matter?" it's incredible. Truly, there's never been a better time to be a motivated learner. The bottleneck then maybe becomes: do we have a system designed to create and encourage motivated learners?

Beatrice (31:52)

Yeah, for sure. I didn't even know that there was a teach and learn mode on ChatGPT. That's great.

Nathan (31:58)

It's relatively new. I'm hoping to get the product manager for that onto the podcast to talk about it more. Khan Academy was an early pioneer of this, and I did an episode not too long ago with the founder of a school system called Alpha School. They do academics in two hours in the morning, and the afternoon is entirely enrichment—projects, field trips, group work, whatever. The AI is entirely responsible for delivering the content. They're not soft on academics; you still have to learn all the same stuff from the US core curriculum. Their kids are scoring super high on the same exact tests that all the other kids are taking, but they're doing it in two hours. The adults at the school are now playing these other roles of mentor, coach, guide, etc. Two hours a day seems to be enough for what traditional classroom delivery—the "sage on the stage"—takes much longer to do. My kids are a little young for this now, but I think they will have a radically different educational experience than I did, that's for sure.

Beatrice (33:45)

Yeah, I heard about Alpha School; that seems amazing. I also wanted to comment on the antibiotics thing; that was something I had completely missed, and I'm very interested in this space, so that's fantastic to hear. It also reminds me that this podcast, the Existential Hope podcast, is part of the Foresight Institute, which was co-founded by Eric Drexler and Christine Peterson. In his old book Nanosystems from the early 90s, Eric Drexler wrote about what he called the "design space," thinking foremost about molecular machines and what we could be doing. I think that in silico work is really promising and hopefully the bridge between the world of bits and the world of atoms. So hopefully, we'll see a lot more of that soon.

I wanted to go towards the Drexler idea because I know another thing you've mentioned a few times on your podcast is his idea of "comprehensive AI services." It's like this web of more specialized AIs rather than general agents. Do you have any takes on comprehensive AI services? And I'd be curious to hear your thoughts on how that compares to something like a tool AI approach or a scientist AI. How do you think about these sorts of approaches? How do you think they differ, and which ones do you think are most promising?

Nathan (35:21)

Yeah, it's a great question. I love the comprehensive AI services vision. One thing I've observed is that anything in pure form is dangerous. That could be sugar purified out of a naturally occurring food, or cocaine purified out of coca leaves. In the Andes, people chew coca leaves their whole lives and it's not a problem. But you purify it to cocaine, and you're immediately dealing with something pretty dangerous. I generally think that what seems to be stable in nature is some sort of ecology, some sort of buffered system.

So the idea of a singleton, or some superintelligence that can do everything and is way more powerful than everything else, that to me doesn't feel stable. I don't like the idea of a superintelligence that is better than all humanity at every task because I have no idea how we would control such a thing. I have a hard time imagining that we could be in a stable, enduring equilibrium with such a thing for a long time.

So I tend to prefer the idea of a more competitive, interactive, buffered system. And for me, that's at the core of this comprehensive AI services idea. It's safety through narrowness. It's not to say that the AIs aren't really good at what they do—they could be superhuman at what they do—but in the same way that we have superhuman chess engines that can only play chess, and we have superhuman protein-folding AIs that can only fold proteins, you don't really have to worry that they're going to do something surprising. They might be surprising locally within their domain, but they're going to stay in their lane. I think that would be a really good design decision if we could manage it: to have AIs that are potentially superhuman in their domain but are in a pretty fundamental way limited to their domain so they don't do an end-run around whatever guardrails we've put in place.

I wouldn't say that the comprehensive AI services idea solves all problems. People still have good questions. There's this idea of "gradual disempowerment," the "intelligence curse," and the "abundance trap." They all seem to be getting at this idea that if AIs are doing everything, what are we going to do? What incentive will entities like governments or corporations have to invest in people if we're not needed to do economically required work?

One other take on this is it makes me quite uncomfortable that the plan at the frontier AI developers seems to be to get AIs to be able to do AI research as soon as possible. Use that to accelerate AI research, which is already going super fast. They're like, "Well, we've got 500 really good researchers here at DeepMind or OpenAI. But if we had AIs that could perform at that level, we could have 5 million." And I'm kind of like, "Oh God, that seems like a recipe for potentially creating something super powerful, but also losing control over what it is we're creating."

The comprehensive AI services idea sounds like a slower developing plan. I think that would probably be good, but I'm not sure how we get from the trajectory we're on, where multiple people are credibly approaching that tipping point where AIs start to do AI research. How do we get to a comprehensive AI services vision that could come online in the timeframe before some of these Manhattan Project-style things fly a little too close to the sun? That's maybe the toughest question for me on that vision, and that's maybe where we do need some sort of regulation.

Beatrice (44:10)

Yeah. We recently did a little world-building exercise of what it would look like to have a "tool AI" future, which I think is kind of the same as comprehensive AI services. We were thinking about what it would look like to have AI that's mainly focused on being a tool, so it's limited in some of its agenticness and generality, but very useful to us. We thought the main thing that could potentially put us on that trajectory would be some sort of legal or insurance-driven mechanism, as insurance companies probably don't want to cover systems that are too opaque or that no one is liable for.

Nathan (45:05)

Yeah, I like that. I'm going to do a podcast before too long—I actually just made a very small personal investment in an AI underwriting company, and they are trying to realize that vision. It would be extra nice if there was a mandated insurance requirement, because one thing companies could do today is just not buy any insurance. So yeah, if we required insurance and brought that whole mechanism of trying to model out and price risk, some things might be uninsurable, and if they're uninsurable, maybe they can't happen. I think that could be really good.

Beatrice (45:52)

Well, great to hear that someone is already working on it. One thing I also wanted to talk to you about is that you're one of relatively few people trying to take seriously both the transformative positive opportunities of AI and the risks. I'm curious what you think it's like to balance that tension and how you keep yourself from sliding too much into one or the other.

Nathan (46:21)

It honestly comes very naturally to me, and I feel like the updates we get on a regular basis require that. I don't really know how I could have any other worldview than this classically ambivalent one—super excited by the upside with a healthy fear of the downside. You just see these eureka moments. I have one presentation I call "Eureka Moments, Bad Behavior." You see these antibiotic discoveries, or the "virtual lab" from a Stanford group where a human gives a problem to an AI, and the AI spins up its own other agents to design new treatments for novel strains of the COVID virus. That's amazing, right? How can you not be super excited about that?

But then the next post I scroll through on Twitter will be like, "Deception is on the rise in the latest models," and we're starting to see these scheming behaviors. How do you look at something that has that power but also reflects back to us some of our worst tendencies and not feel these dual-track feelings?

I just did an episode with Joe Hudson, who is an executive coach to a bunch of people at various AI companies, including Sam Altman. I asked him a similar question, and he said that in his experience, everyone at the frontier companies has this mindset. What you see on Twitter is a little misleading because you've got booster accounts and doomer accounts. His view was everybody he's ever met at these frontier companies has that dual-track mindset. They're all seriously grappling with the ramifications of their work. I think that's encouraging. However, I also asked him if he ever expected that we'll see an AI developer stand down because they feel they can't go any further in a responsible way. His answer was no. He said they're problem solvers and will just look at that as another problem to solve.

Beatrice (50:01)

I agree with you. You put it very simply, that it comes naturally to you, and when I think about it, I think it comes naturally to me as well. However, on Twitter, that's not really the impression you get. It feels like what we're trying to offer in this podcast is thinking about the positive trajectories because the negative ones are very easy to envision. Do you have any thoughts on why that is and what you think is missing from the discourse that could help us aim better towards more positive futures?

Nathan (50:56)

It's hard. Eliezer Yudkowsky has this famous idea, which I think goes back to Vernor Vinge, that if you could predict what a smarter thing would do, then you would be as smart as the thing. If it's genuinely smarter than you, one of the ways that presents itself is that you can't predict what it's going to do. Eliezer then adds to that, but you can predict that you will lose to it in a competition. You can't predict the moves of a superhuman chess player, but you can predict that you will lose to it.

I think it's just pretty hard to envision, and we should probably expect the future to be quite weird and surprising. What can we do about it? I do think higher standards would help. In politics, we have all these culture war preoccupations. I've often wondered, why is nobody running on a super pragmatic idea of, "Here are five ways we're going to make daily life better for everybody"? On my list, self-driving cars would be one of those things.

Tyler Cowen famously said that one of the most high-impact things you can do is help individuals raise their personal level of ambition. Maybe there's a society equivalent of that. Can we help society raise its level of expectations? "Where is my flying car?" But for real, where is it? Why has our leadership lost all connection to making material life better? You almost never hear about how you're going to make a more prosperous future that can give more to everyone. We've kind of lost track of that. I don't know why, but if people were encouraged en masse to demand better, that seems like it could really help.

Beatrice (54:31)

That's good news that you think they're on track to happen. Did you see the sci-fi question? If you could rewrite the sci-fi canon around AI, what would you do?

Nathan (55:28)

Yeah, I do confess I'm not as well-read as many thinkers in this area. Obviously, more positive visions go without saying. Maybe more branching scenarios would be interesting. The AI 2027 scenario that recently made waves was notable because it had multiple endings.

Maybe one twist on helping society demand more is to make it somehow clearer to people that this is up to us. We get to decide what we're going to do. France built a bunch of nuclear power plants and now has relatively abundant nuclear energy that's not contributing to the carbon problem. We could have had that, and we don't. So showing these hinge points in history where things could go one way or another, and showing how different the future ends up being based on certain decisions, maybe that could help people adopt a more possibility-oriented mindset.

There's a way in which people, even in the midst of history, relate to it as a viewer, a passive consumer of content. Is there any way to create content that could snap people out of that mindset? AI could really enable this. The ability to create forking scenarios is expensive right now, but AI could perhaps make it economical. What if you had to sit there in your Netflix show and make decisions, and then get a world that was meaningfully influenced by the decisions you made? Could we teach people through an entertainment medium that there is real consequence to decisions and that your agency really matters?

Beatrice (58:39)

Yeah, that's a really interesting point, the branching scenarios. Didn't Black Mirror do something like that a few years ago? But yeah, I agree, it'll probably be coming. We're at time basically, so wrapping up, I wanted to shift gears and pick your brain on podcasting. After all the episodes you've done, what are the main lessons you're learning? Do you have any recommendations for someone hosting a podcast?

Nathan (59:58)

Not really, to be honest. I sometimes call myself the Forrest Gump of AI. What I mean by that is I'm just stumbling my way through and often find myself in interesting places, usually as an extra in notable events, but I haven't been that strategic about it. The main thing I try to do is, with apologies to Tyler Cowen again, have the conversation I want to have.

The way the podcast started for me was my friend was starting a podcast network, and he said, "Hey, all you do is talk my ear off about AI. Why don't we record a couple of these?" I was like, "I don't know how to do that." And he was like, "We'll take care of everything for you. All you have to do is talk." So I tried it. The mindset that I went into it with was, I just want to learn as much as possible. If I can get people to teach me stuff and have interesting conversations, and nobody listens but I get value from it, that alone could be a win.

So I really went into it with an attitude of, if I'm having conversations that I want to have and I'm learning from them, that's enough for me to be happy with how I'm spending my time. Anything else was just gravy. I'm not in a position where I'm forced to think too much about what episodes do well and what the numbers look like. I try to stay true to the original idea of just wanting to learn as much as possible, and the rest has all been letting the chips fall where they may.

Beatrice (1:02:47)

You know, to some extent that feels like great advice because it's encouraging to hear that you can just do it for the joy of it and for things you're curious about.

Nathan (1:02:58)

I think Joe Rogan would describe himself pretty similarly. For years, he was just shooting the shit with his comedian buddies and then it blew up. But he did a lot of episodes before he became huge, and mostly he was just getting high and having fun, I think.

Beatrice (1:03:17)

That's true. Same with Tim Ferriss. It's a wide range. And with you, I guess it's a wide range within a narrow topic, but like you say, it's a very broad technology. But yeah, I think that's all we have time for, Nathan. Thank you so much for coming. It was really, really nice to chat with you about all of this.

Nathan (1:03:38)

Thanks so much for the invitation. This has been really fun.
