In this episode of the Existential Hope Podcast, Nicklas Berild Lundblad, a philosopher, researcher, and former policy lead at Google DeepMind, Google, and Stripe, explores the interplay between progress, complexity, and the transformative potential of artificial intelligence.
Nicklas discusses why asking the right questions is crucial for navigating our future, especially as AI challenges our self-perception and introduces new forms of complexity. He examines the "soft narcissism" in AI development, the distinction between AI and AGI, and why we should view current AI not as a mirror, but as a strange, exotic artifact whose full capabilities we are still underestimating.
In this conversation, we explore:
- The critical relationship between progress and complexity, and why managing this dynamic is essential for societal growth (including the "Red Queen effect").
- Why current AI developments feel different from past tech hypes.
- The potential for AI to revolutionize scientific discovery.
- How AI could accelerate its own diffusion.
- The need for curious regulators, mechanisms for change, the challenges of agentic AI, and how cultural biases might affect our approaches to regulation.
- The Solow Paradox and the Gartner Hype Cycle as frameworks for understanding technology adoption.
"I think curiosity organized in a way that allows us all to partake in curiosity and explore the limits of our knowledge and mankind's knowledge is all I hope for. If we can get that and we can continue to be curious at ever more powerful levels in a sense, then I think that will be really, really interesting. And that's my favorite version of the future."
Nicklas Lundblad is a multi-disciplinary thinker, researcher, and advisor working at the intersection of philosophy, technology, law, and policy. With a distinguished career in the tech industry, he has held senior roles at Google, Stripe, and most recently, Google DeepMind, where he was involved in founding the Frontier Model Forum to help shape industry best practices.
"I think curiosity organized in a way that allows us all to partake in curiosity and explore the limits of our knowledge and mankind's knowledge is all I hope for. If we can get that and we can continue to be curious at ever more powerful levels in a sense, then I think that will be really, really interesting. And that's my favorite version of the future."
Beatrice Erkers: I'm very happy to be here today with Nicklas Lundblad. Thank you for taking the time, Nicklas. We've had some issues getting the recording working, but hopefully this time we'll get it right. I want to introduce you using, I think, your own very descriptive Twitter bio where you've tried to capture the broadness of your interests and career. So it's: Philosophy, technology, law, policy, and reading, researcher, investor, and advisor. You spent time at Google DeepMind most recently, but also previously at Stripe and Google. So, I was very excited to have you on the podcast because you have this very broad range of interests, and I think most of them align very well with this Existential Hope program.
So, let's start. One of the questions that I wanted to ask you is a bit like, why are we doing this interview? Because you wrote a book about the importance of asking questions and just that it actually matters that we're curious and ask a lot of questions as humans. So you've said that asking the right question is the most powerful thing that we can do. So I'd be interested to hear, what's one question that maybe you've been asking yourself lately? And also just tell us, why is it so important to ask questions? How do we ask the right questions?
Nicklas Berild Lundblad: I think questions are a core part of knowledge. We used to think that knowledge was a list of things we knew – that the earth is round or that body temperature is 37 degrees, etc. But what you really know is a series of answers to questions. What's the approximate geographical shape of the earth? What's the average normal body temperature of a human being? And those answers are paired with the questions. If you ever want to get to new knowledge, you have to change the questions. And I think one of the core things we have to do is to grow our curiosity and be curious about the world. So questions become a key instrument in that work.
When Fredrik Stjernberg and I wrote the book – he is a professor of philosophy at Linköping University – we did it because we couldn't find any good books about questions: the philosophy of questions, the biology and evolution of questions, questions in culture. And we were sort of flabbergasted because we thought that questions are one of the things that set us apart from a lot of other things in nature and also one of the things that we're surprisingly good at, even compared to some of the artificial intelligence models that we're building. I think asking questions is super important. And as to what question I'm asking myself, I guess the question I'm really interested in is: how can we combine progress and complexity? Because all of the progress we make generates new complexity that requires new progress. And to me, the combination of progress and complexity, the way we deal with complexity, is absolutely essential to how we grow as a society.
Beatrice Erkers: Yeah, I think that's obviously a great place to dive in. I think that's really a core question that I'm personally very interested in as well. Because I think with this program, the Existential Hope program, we're very, I would say, pro-technology, to put it simply. We think that a better future is very likely to be a high-tech society. But it is also quite obvious that every new technology brings its own challenges. And I heard someone refer to it as "the blast radius of new technologies is bigger today than it was." So, through your decades of thinking about this, do you have any main takeaway? Is there a standard operating procedure when we introduce new technologies into our society?
Nicklas Berild Lundblad: There isn't a standard operating procedure. We do this in really different ways. Often you can think about new technologies... one way of modeling this is to say that introducing a new technology is increasing complexity in society. Typically what we do is that we have social mechanisms, institutions for example, that absorb complexity in different ways. They help us regulate the technology, understand, deploy, use the technology in different ways. One of the responses to increased complexity in society is changes and shifts in the way that we institutionally think together or make decisions together. So I think that's how we typically do this, but it's rarely a planned process because technology doesn't come in increments. Technological leaps are not smooth, but they're often lumpy in the sense that you get a lot of technology at once. And then, even if I don't like the "blast radius" metaphor, I think you could say that technologies come in clusters. So one technology enables others, and you see this phenomenon with general-purpose technologies that then create a whole second wave, which means that rather than having a smooth technological evolution where you introduce a new technology every hundred years, you introduce the new technology and then you have a Cambrian explosion of applications of that technology. This means that there are so many different use cases and changing social patterns and economic models, et cetera, that you have to deal with. And that's where the institutions come in. And institutions are designed to be slow because inertia is a value in an institution. But at those points of inflection, when we have technological change, that value almost contradicts our need to adapt in certain senses.
Beatrice Erkers: And do you think that it's always the same questions that arise with new technologies? Because I'm more anecdotally... I've been talking to people who have been following technological progress for the last few decades and some are saying that, oh, it's just like with the AI developments now – the ones that I think are most top of mind for me and maybe most other people right now are thinking about these things – they're like, "Oh, it's just the same every time. It's the hype. People are scared and then it always ends up not being as big as was expected." But then I did talk to Christine Peterson, who is the co-founder of the Foresight Institute, and she was there for the nanotech hype in the 80s. And she said that it's different now with the AI hype, that the progress is just much more feasible. And yeah, do you have any takes on this? Is it the same?
Nicklas Berild Lundblad: I don't think it's necessarily the same. I mean, I remember when the internet came and there was a hype around the internet and we all had to adapt to it. That was very different from what we're seeing now. And there's one core reason for it being different, I think, rightly identified by Christine: this technology is very much about us. It's close to who we think we are. In some cases, we've almost built it as a mirror.
One thing I come back to is that I think that underlying the whole project of artificial intelligence is this soft narcissism. We want to build something that is a little bit like us so we can mirror ourselves in the technology and think how great we are because we managed to build something like us because we are the crown of evolution. And so there's this sort of really soft narcissism that goes into the project. And I think that also challenges us in a very different way than the Internet did. The internet was a communications technology. Suddenly we could talk to the whole world. There were a lot of challenges in that. Information monopolies that had, to that point, been owned by nation-states, for example, broke apart with tons of different consequences until they were reconsolidated in 2015 and after. But artificial intelligence asks us questions about who we are, not who we talk to. That's a very different kind of question.
So that's why I think you can see more angst, more anxiety around artificial intelligence than you see around many other technologies. And perhaps not wrongly so, because this is a technology that does challenge our image of ourselves if we're not careful. Now, I think it's a mistake to think about artificial intelligence as a mirror, because I think what we're building is very different from us. It's better to think about artificial intelligence as a very strange, exotic artifact that you found in the woods and now you're trying to figure out what it does, and you don't know where it comes from and you don't know what it can do. But you need to approach it really from scratch and try to learn about it. Because if you come to artificial intelligence thinking this is a thing like me, you're going to fall into all kinds of cognitive traps. You're going to fall into the Dennett trap, which is essentially confusing competence for comprehension, thinking that if something is really good at something, it also understands what it does. Understanding is different from competence in these artifacts, and we know that because we have tested them in different ways. All of those different cognitive traps are just there waiting to lure us in if we believe that this technology is more like us than it actually is. This is something different. It's stranger and weirder and more exotic than most people think because most people think that we're building a mirror.
Beatrice Erkers: Yeah. When you think about AI and AI developments yourself, what do you think about the AI versus AGI thing? So for you, is it a specific capability threshold or something else? Because people often talk about it, but it's not clearly defined in my head what I would say actually is AGI, whereas for some people – you know, the singularity – it's something that has a clear shape in their mind. Do you have any takes on this?
Nicklas Berild Lundblad: There's no single definition. There are plenty of different definitions. One definition is economic, saying that when an artificial intelligence model can do all of the economically relevant things or some percentage of the economically relevant things that human beings can do, then we have AGI. There are some people who say that, no, it's actually a version of the Turing test that whenever something has simulated us well enough, et cetera, et cetera. So tons of different versions of that.
I like the question that Demis recently asked in an interview that was given just at I/O, which we sort of just passed through, where Google, my former employer, had this developer conference and he was interviewed on stage. One of the things he was asked was exactly, "What's AGI?" And he said, "There's no one definition, but the thing I'm interested in," he said, "is what is the cognitive potential of the human brain? What kinds of things and problems can it solve?" If you just look at the architecture, if you don't look at where we are now, but if you look at the human brain and you imagine the class of possible problems that the human brain could successfully solve in different ways, then I think you get a much more interesting question, which is not about artificial general intelligence being sort of like humans, but actually being a question that is more abstract, which is: what kinds of problems can a general intelligence solve?
If you think about the class of all challenges or problems that we are facing, you can ask yourself, "Okay, this is a huge search space. Human brains have an ability to search this space in certain ways. What about this other thing that we're building? Can that search the solution space in a new way?" If you do that, then you might say artificial general intelligence is actually going to be much broader than human intelligence because it's going to be able to find entirely different solutions in this space.
And there is an analog here that I find quite interesting, actually. It's like, if you think about how these machines play chess or Go, there's something that we discovered in how they play that's really interesting. And that is that if you seed the games with human knowledge, they end up playing a lot like human beings, although better. They can beat you. But if you remove human knowledge entirely from these models, they start to play exceedingly well and entirely in a non-human way. So there's plenty of evidence from chess masters and Go masters who have said when they play these machines that it felt like playing an alien because it was playing in such a different way. It was doing things unheard of in human chess playing or Go playing. And this tells us something. It tells us that if the space of possible chess games is a fitness landscape with peaks and valleys, et cetera, then we have climbed a very local peak here. And we're super happy at the top of that local peak. That's where we become chess masters. We're really good. But what we don't know is that there's a global peak somewhere that's much, much higher in this landscape. And when you let a general intelligence loose in a fitness landscape like this, it can find the global peaks. And I think almost all human knowledge is probably like this. All human science, all human knowledge is probably a local peak in the giant knowledge fitness landscape, where we're very happy sitting on the top of our local peak and we don't know that there are plenty of global peaks out there. So AGI for me is the ability to search the fitness landscape for global peaks when it comes to universal knowledge about the world.
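To make the local-peak versus global-peak picture concrete, here is a minimal Python sketch. It is not from the conversation: the toy landscape and both search strategies are invented for illustration. A greedy searcher anchored where we already stand settles on the nearby peak, while a searcher free to restart anywhere in the landscape tends to find the higher one.

```python
import random

def fitness(x: float) -> float:
    """Toy 1-D landscape: a modest local peak near x=2 and a higher global peak near x=8."""
    return 1.0 / (1 + (x - 2) ** 2) + 3.0 / (1 + (x - 8) ** 2)

def hill_climb(start: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy local search: only accepts uphill moves, so it stops on the nearest peak."""
    x = start
    for _ in range(iters):
        x = max([x - step, x, x + step], key=fitness)
    return x

def random_restart(n_starts: int = 50) -> float:
    """Broader search: many random starting points, keep the best peak found."""
    peaks = [hill_climb(random.uniform(0, 10)) for _ in range(n_starts)]
    return max(peaks, key=fitness)

local = hill_climb(start=1.0)   # a searcher anchored near the "human" starting point
best = random_restart()         # a searcher free to explore the whole landscape
print(f"local peak:  x={local:.2f}, fitness={fitness(local):.2f}")
print(f"global peak: x={best:.2f}, fitness={fitness(best):.2f}")
```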
Beatrice Erkers: Yeah, we're so defined by our own format to some extent, or the way that we can perceive light or the world. I like that, that's very interesting. Do you have any hot takes on AGI timelines or something like that?
Nicklas Berild Lundblad: No, I don't do timelines. I decided a long time ago that I don't do timelines. I think we should act as if it's close. I think there is a heuristic here, which is if you think that there is something quite transformative coming down the pike, you should act as if it's close because that makes you better prepared. And so I think the heuristic is: think that we're five years away, five to 10 years away, because then you will be better prepared with institutions, regulations, frameworks, and even with your own mental attitude.
Beatrice Erkers: So just out of curiosity, why don't you do timelines? And then how should we think about this new wave of AI coming today? Like how should we, for example, plan to govern it?
Nicklas Berild Lundblad: Well, the first thing, the reason I don't do timelines is that historical evidence suggests that I'm bad at it. So it's like, I don't do things I'm bad at. I try to stop that. Let's not do the things I'm worse at.
And on how we should prepare, I think what we need to do is build mechanisms for change. So one way to do this, of course, and there are some people who argue that we should do this, is to slow everything down and to make sure that we stop or pause or do something like that. Is that viable? You can have a reasonable disagreement about whether or not you can do that. Probably the good guys will pause, the bad guys won't, so that's a problem. We need to figure out what that looks like. But I really don't want that. I want mechanisms for change. So I would like for our institutions to set up learning observatories. How is our education going to change if we have access to unlimited cognition? If cognition, the ability to think, becomes a bit like electricity, like a utility? Say you have "cognicity." So whenever you need a little bit of thinking, you can just tap into it and you can inject that into an institution. How would education change? What can we do differently? What are the core kinds of things that we still want to do as human beings? And what can we delegate to our extended phenotype in different ways? I think that's a really interesting question.
I think for policing, for defense, for all of these different institutions, we should have the same kind of ongoing dialogues, structured learning observatories, thinking about what are we going to do and how should we change. So the first thing we do is we build these adaptive, almost listening mechanisms that allow us to understand the change that's coming. So that's really important.
And the second thing I think is that we should try to understand how existing legislation applies. There is plenty of legislation that doesn't just go out the window because you suddenly do something with an AI. And that's a discussion to be had with lawyers and other legal professionals to say, "Okay, how would you judge this particular case if a self-driving car drives into someone? Whose fault is it?" And there are existing rules around liability that are quite strong where we can think this through and we can talk about what is the right solution here. And I think that's something that is also lacking.
And then I think the third thing that I would say is there are some cases where I think we also need special legislation, where it's reasonable to have, for example, safety testing in place. We've been advocating for safety institutes. We think it's important to have safety standards. We think it's important to publish model cards, et cetera, as an industry. I was a part of founding the Frontier Model Forum. And the reason we founded that when I was still at DeepMind was that we believe that the industry has the responsibility to share best practices and different kinds of standards, and that in itself can inform others about how the field is growing. So there are plenty of things that we need to sort of get in order if we believe that this technology is, or even if we just use the heuristic, this technology could be coming soon.
Beatrice Erkers: That's really interesting. What do you think about this... one thing that I've been talking to people about lately is this, it's more of a grayscale approach to some extent because, you know, there is that pause or full speed ahead discussion. But one sort of grayscale approach would be a tool AI approach of focusing on advancing narrow AI and applying it in all the domains in society that we want help from AGI with anyway, like healthcare or coordination. And we could just see potentially enormous benefits from narrow AI systems, basically. But do you think that we would be potentially missing out on more grand outcomes from AGI? And in that case, it's better to push for AGI than something like this? Or yeah, what do you think about a tool AI approach?
Nicklas Berild Lundblad: It's a good question. I mean, I am to some degree content with letting the market solve that, because I think that there is demand for the kind of AI that you described that's more reliable, for example. I think it was Andrew Ng who said at some point that for the next five to 10 years, most money in AI will be made from supervised learning, which is probably not untrue, right? There are simple technologies, simple methods that can be used in different ways, and you don't need to go for full AGI if what you want to do is to have a better supply chain optimization system. So I think there's going to be demand for that. I think a lot of those solutions are going to be built. I think we see that happening today too, the demand for deployment, because that's also the way that we get productivity growth from AI in the first wave.
And then I think there are going to be people and companies and organizations that still want to explore AGI. I don't think we have a full roadmap yet. We don't know how to get there from here. So essentially that is just saying yes to research. And I think that is a really good idea because that research is not just leading us closer to AGI, which might be a good outcome – and you can be agnostic about that – but it's leading us closer to more energy-efficient AI, to more data-efficient AI, to more secure AI as we're building towards AGI. So AGI as a research program is giving us all of these good things that can then trickle down into the narrow tool AI that you described. So I think they are inherently connected in different ways. And so to me, the way that we divide our resources across the general and the narrow program is going to be determined in large degree by market pressures. And I think it's also really hard to change that. I think it would be really hard to say that, for example, you want to ban research in AGI, but you're quite fine with narrow AI. How do you know that the narrow AIs can't be combined? In fact, if you think about the way that things are evolving now in architectures, you have this notion of a mixture of experts where you have modular models that have experts that are narrow, but they combine to become AGI. So say you build your tool AI and then suddenly when you have five of them, you're like, "No, we got AGI. What do we do now?" So conceptually, I think it's really difficult to see how you can build one and not the other.
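As a rough illustration of the mixture-of-experts idea mentioned above, here is a hedged numpy sketch. The tiny linear "experts" and random gating weights are stand-ins, not how production models are built; the point is only that narrow modules can be blended through a learned gate so the combined system covers more than any single expert.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "narrow" experts, each just a small linear map in this toy version.
experts = [rng.normal(size=(4, 2)) for _ in range(3)]

# A gating network scores how relevant each expert is for a given input.
gate_weights = rng.normal(size=(4, 3))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route the input through all experts and blend their outputs by the gate's softmax scores."""
    scores = x @ gate_weights                       # one score per expert
    gates = np.exp(scores) / np.exp(scores).sum()   # softmax over experts
    outputs = np.stack([x @ W for W in experts])    # each expert's narrow answer
    return (gates[:, None] * outputs).sum(axis=0)   # weighted combination

x = rng.normal(size=4)
print(moe_forward(x))
```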
Beatrice Erkers: Yeah, I think there's a lot of truth in that and similar to your not being good at predicting timelines, I think we're all not going to be good at predicting the exact outcomes of all of this. It would be interesting to zoom in on some more concrete applications of just how AI could be helping us. I heard you in another podcast where you talked about how we could see AI in healthcare, for example. Do you have any, like when you think about the most sort of immediate, useful, hopeful cases, what do you get most excited about?
Nicklas Berild Lundblad: So I think healthcare is interesting and I do think we're going to see a lot of healthcare applications. If you just scan what's being published right now in academic articles, I think there's a ton of stuff being published around healthcare, which I think is really interesting. Education and energy – I think those three sectors all stand ready to be possibly transformed by this kind of technology.
So when it comes to energy, to take the simplest one, I think it's an optimization problem. Now, if you can optimize energy flows in different ways, I think this will be really good. Energy is kind of interesting though, because I also think energy, when we talk about emerging technologies writ large, needs to change in terms of how we produce it and in terms of how we architect the network topologies of energy. Currently, we have a few really large centers that produce energy and send it everywhere. So we have a lot of loss of energy in the network in different ways. And I think that we need different kinds of energy networks and we need different kinds of energy transmission as well. So one way to think about the energy networks that we have today is that they are a lot like the telephone networks. A friend of mine, Jonas Birgersson, recently launched a company where he essentially said, "No, I'm going to be the Cisco of the energy network," building routers that can predict and work with different kinds of energy demand, decentralizing the energy network into something that would be open for local production of solar power or battery power or whatever you will. So I think the energy network needs to both change and optimize. And an interesting question is whether AI optimization of the energy networks actually might slow down the necessary network transformation. So there are all these really interesting trade-offs in that sector. I think it's one that's understudied, not least because we're going to need a lot more energy for AI in the future.
The other one, healthcare, I think is also really interesting because the ability that we have to better understand patients, if we just record conversations with them, transcribe them, and put them into databases, is enormous. If you also have some sentiment analysis, tonality analysis, you can hear when a patient is nervous or you can hear if there's something in their voice. There's so much rich predictive data that we can get out of the patient interaction that's currently being lost. And if you then think that you also could aggregate that in different ways, you get what would be essentially a health radar. So if you know that the flu is in this particular part of London this week, you can allocate vaccines to elderly people that are close by so that their chances of dealing with the flu go up. That kind of allocation problem can also be solved really interestingly with AI. So I think that health is one where we're going to see quite a lot of change as well.
Education is trickier because we have to figure out what it is we actually want education to lead to, what are educational outcomes that we believe are good. And if we believe that education is something that primarily gives us internal knowledge that we can refer to and that becomes a part of us and how we become mature, real, interactive human beings, then we have one view of education. If we think education is skills-based and should essentially give you a set of skills, then we have another. If we think it's a way of turning you into a citizen, then we have a third. So education is, I think, one of the most interesting areas to think about when it comes to how AI could transform and how it should not transform certain parts of it.
So all these three will be really important, but the one I'm most excited about, the one that sort of really gives me hope to talk about the name of your podcast, is science. I think the ability to use artificial intelligence to find new scientific insights is incredibly exciting. Because if we manage to do that, if we could accelerate the pace of science so that you get a hundred years of expected scientific progress, but in the next 10 years, I think that's the best bet we have of addressing the complexity question. We can get continued progress if we can use science to address the complexity that progress creates. So this is true for finding new drugs, finding new materials, finding new mathematical physical insights. You can essentially imagine a world in which science is automated and there's constantly a set of large data centers with, I think "a nation of scientists" was what Dario Amodei wrote in one of his essays, that work on these problems.
And ultimately, there are two perspectives here. One is let's build a better scientist, which is an interesting perspective. Because then if you think about what a scientist is, it's somebody who is conducting science within the framework of a theory. And that's valuable, but the theories are the ramifications and the conceptual frameworks for what that scientist can do. Now, there are two kinds of discovery processes in the world. One is the scientific one, where you start with theory and you build that out with hypotheses and experiments and all of those kinds of things. That's brought us a lot of value. What we possibly now have the chance to do is to build atheoretical science.
So, atheoretical science already exists. Nature invented it. It's called evolution. Evolution doesn't care about what physics theory you have. Evolution will use what's there in order to build advantage. That's why we find now, many, many, many years later, that there are quantum phenomena that have been utilized by evolution in biological systems. So, for example, the ability of birds to navigate or the ability of plants to harvest solar energy through photosynthesis, etc. All of those have, in quantum biology, been found to involve certain quantum effects that evolution just used because they were there. Evolution is completely atheoretical. It just finds things that work, mechanisms that give capabilities that allow us to do things and develop our repertoire of options when we interact with the world. So I think one really interesting way of thinking about the future of science is: why don't we build atheoretical science that just explores what's there? That would require doing something really cool, like saying, "Okay, we're going to build massive sensor networks." And you can build however many sensor networks you want, and they can measure any number of physical or biological qualities. And we're just going to download those enormous datasets, and we're going to let the AI find patterns in it. And when it finds patterns that are useful – it doesn't have to have a theory – patterns that are useful to accomplish something, then it's going to document them and we're going to turn them into machines.
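Here is a hedged sketch of that pattern-first workflow, assuming scikit-learn as the toolkit; the synthetic "sensor readings" and the choice of two clusters are invented assumptions. The point is that the pipeline returns a usable grouping without ever proposing a theory of what produced it.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Pretend sensor network: 1,000 readings, each a vector of 20 measured quantities.
# Two hidden "regimes" are baked in so there is a pattern to find without any theory of it.
regime_a = rng.normal(loc=0.0, scale=1.0, size=(500, 20))
regime_b = rng.normal(loc=3.0, scale=1.0, size=(500, 20))
readings = np.vstack([regime_a, regime_b])

# No hypothesis, no model of the underlying physics: just look for structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(readings)

# The "harvested" result is a usable grouping, not an explanation of why it exists.
print("readings per discovered regime:", np.bincount(clusters))
```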
But at that point, something interesting happens. At that point, we sort of lose visibility into science. It becomes opaque. It becomes very hard to explain. And so the question here is that, at some point, we could be in a situation where we have to choose between scientific progress and scientific understanding. And that's kind of cool because it's one of those things that was predicted by the science fiction author Stanisław Lem in his crazy 1960s book, Summa Technologiae, where he writes about a world in which we'll start to harvest scientific insight, just like we harvest wheat. We don't understand wheat much. We don't know how it sort of grows. All those different things are kind of opaque to us. We just know wheat is really good for us. So maybe we're going to be able to harvest science in much the same way, because we have atheoretical science systems that are constantly finding things that work without being able to explain to us why they really work. So imagine a humanity – this is the science fiction story that you could write – a humanity that has an enormous amount of capabilities, but when visited by aliens has no ability to explain why we can do what we can do. Or imagine, on the other hand, humanity finally finding that extraterrestrial civilization that is super advanced and we're hoping to learn from them. And they go like, "We don't know. This stuff just works. It's just been working for the last thousands and thousands of years. Nobody really understands it, but it's pretty neat." And so maybe there is an explanation horizon to the complexity of science, beyond which we're just going to enjoy the capabilities, skills, and progress that we can get from science, but we're not going to be involved in it.
Beatrice Erkers: Hopefully AI can explain it to us or something that we can understand.
Nicklas Berild Lundblad: Yes, no, and there's a lot of work going on with that. I mean, it's very speculative, but I do think that it's interesting to sort of consider because there are so many things that can change fundamentally here.
Beatrice Erkers: Yeah, I do feel like I have to read this sci-fi story.
Nicklas Berild Lundblad: Yeah, no, the book is fantastic and he has a much better name for AI too. Stanisław Lem, in the 1960s, wrote about AI, black box problems, all the things that we sort of thought we discovered – he wrote about them back then, and he called the field "Intellectronics." So much cooler. I mean, if I could say I've been working on Intellectronics for the last 20 years, that would be much more fun. Yeah.
Beatrice Erkers: Much cooler, yeah. I'll make sure to link that with the episode. Last week I interviewed David Baker, who was awarded the Nobel Prize along with Demis and your former colleagues, and he was saying that maybe there's a tendency to over-update on the success of AlphaFold and his work when it comes to how helpful AI can be for science, because we need such good data and such good benchmarks, and there are very few problems where we have that – though maybe that's just for now. But you think that we're not over-updating on it, basically?
Nicklas Berild Lundblad: Well, I think he has a definite point because it does require a lot of data and structure to the data. And AlphaFold specifically and the work that he has done as well really requires that you have a sense of how to measure progress in the protein folding problem. You're not going to have that in all other sciences. But I do think that we have a lot of data and can create and collect a lot of data that will help us make progress in science quite fast. So my sense would be yes, we're over-updating if we think that we're starting from present-day infrastructure and datasets. Absolutely. If we just take what we have today, it's going to be a bit tricky. But I don't think we're over-updating, given the potential we have for collecting new datasets, building out new kinds of scientific domains. Then I think we're right to be quite enthusiastic about where this can go.
Beatrice Erkers: Yeah, I hope so. A little bit of a jump, but it's kind of related. Another thing that I heard you talk about in another podcast – I've listened to a lot of podcasts with you now, which has been lovely – is the diffusion of technology into society. I think it's such an interesting thing to think about, and usually it takes some time. We have a new technology and it actually takes a lot of time to spread. I was at a conference a few months ago and there were people from the global south there, and they were talking about how they don't have phones, or they don't have phones that are good enough to access ChatGPT on, or something. So obviously the future doesn't come evenly distributed. But I remember you mentioned that maybe this is a bit different with AI, because AI could help diffuse itself, or it can explain to us how to use it. Do you think that AI can help accelerate its own diffusion?
Nicklas Berild Lundblad: Well, I think there's a possibility it can. So there are two theories, right? One is that technology always diffuses through a civilization at the same pace and it always diffuses with social inequalities built in, which means that the global south gets everything last. And if you want to pick a base case, that's probably your best pick because that's how it's been in a large set of cases historically.
Now, the thing that I think is interesting to consider is that AI is a bit different from electricity or the internet. And the reason it's different is that it can actually teach you how to use it. Just take a basic example. Friends of mine who are not necessarily deep in AI sometimes ask me how they should use the technology. And I always tell them the same thing: ask it, because it's going to have a better answer than I do. And I had a really funny episode with an old friend of mine recently where I sort of walked him through how to do all this stuff. And I said, "Remember to change the system prompt. You want the system prompt in the system to essentially have the last line be, 'Tell me something that you haven't seen me do before with you that will surprise me that I can do with you now.'" So if your system prompt always does that, every time you use the technology, you become better at using the technology and you realize what it actually can do.
One of the things that I think is sort of a modern tragedy of the commons is that we have this fantastic technology and I would guess that most of us who use it use it in a way that is far, far underpowered from what it can do because we underestimate it. We don't push hard enough. The first piece of advice I give to people is: ask it, ask it what it can do. And the second is: be unreasonable, be completely unreasonable when you interact with the technology. Demand things that you absolutely think you cannot get and you will be surprised from time to time. And so that's something that's really interesting. And so he then emailed me a little bit later saying, "I really thought that was interesting. Could you help me, tell me how should I change the system prompt?" I emailed back to him and was like, "Ask it." And he was like, "Oh yeah, right." So, and then he did and he managed to change it.
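To make the standing-instruction tip concrete, here is a minimal sketch, assuming the OpenAI Python client purely for illustration; the model name is a placeholder, and the same pattern applies to any chat-style API with a system message.

```python
from openai import OpenAI  # any chat-style client works; this one is just for illustration

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The standing instruction described above: every session should end by
# surfacing a capability you haven't tried yet.
SYSTEM_PROMPT = (
    "You are a helpful assistant. At the end of every reply, tell me one thing "
    "you haven't seen me do with you before that would surprise me I can do with you now."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Help me plan next week's reading."},
    ],
)
print(response.choices[0].message.content)
```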
And that's what I think is so interesting. I think this technology could accelerate its own diffusion. It could be deployed in a really good way once we start using it to understand where it's best deployed and in what order we can deploy it. Now there will be some hallucinations. There will be some things that we don't want to do with this when it's suggested we deploy it here or there, but it could be self-diffusing. If you remember the internet, back in 1996, I was a part of a small educational firm that did courses about the internet. So we taught people to "surf the web," which sounds... I'm just dating myself, you know, when dinosaurs roamed the earth. But we really did have these courses and we taught people to use search with Booleans and all those different things. And you had to do that for people to get utility out of the internet. Now you can just tell anyone of any age, "Ask it, talk to it, see what you can do." And that means that the diffusion factor, the education factor of the diffusion equation looks different. Is it enough for this to diffuse faster, to diffuse more equitably? I don't know. But I hope that it could be. And I think on the equitable part, we still need infrastructure, we still need electricity, all of those things still need to be there. So we shouldn't be too Pollyannaish about it. But maybe if somebody somewhere who is thinking about this, trying to figure out how best to deal with this in the global south, starts to ask the machine, "What is the best thing I can do?" we will get entirely new solutions that we haven't thought of before. And that can help us do this in a good way.
Beatrice Erkers: Yeah, yeah, it's a good prompt in general: ask it. I'll also try to do this asking it to teach me how to use it in new ways, actually. That's a good idea.
Nicklas Berild Lundblad: Every time you use it. I can't say enough how great that is. The kinds of things it suggests are quite amazing. And then I say, "Yes, please do it." And it does it. And I go like, "What?" And if I were to estimate – okay, we have this capability in the machine, that is X – what is the average user's estimation of X? Is it 90% of what it can do? Is it 80%? I think it's below 10%. We estimate that this is about 10% as capable as it actually is, if you think about the entire capability. I think that in itself is also really interesting because that latent capability can be unlocked in bursts and explosions when people suddenly realize, "Wow, I could do this with it." And I think that's something that might also make diffusion look very different from railways or the internet.
Beatrice Erkers: Yeah, I ended up signing up to a Udemy course teaching me to do prompt engineering, but this seems much better and I think I would actually use more of it also.
Nicklas Berild Lundblad: Yes, and use more than one model. That's the other tip. Always use more than one model. Then you can ask one model what it thinks about the other. And you can also find that they really dislike some things that they do. So I use different models. One of the models I use for code. And sometimes I check the code in a second model. And the second model really doesn't like the way the first model codes. So it's like, "Oh, this is so bad." And "Oh, you're declaring..." Oh, sorry. It's really quite funny.
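A hedged sketch of the "use more than one model" tip: one model drafts code and a second one is asked to critique it. The client and both model names are placeholders; any two chat-style APIs would work the same way.

```python
from openai import OpenAI  # placeholder client; any two chat-style APIs would do

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Single-turn helper: send a prompt to the named model and return its text reply."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Model one writes the code...
code = ask("gpt-4o", "Write a Python function that parses ISO dates from a log file.")

# ...and a second model is asked to critique it, which is where the stylistic
# disagreements described above tend to show up.
review = ask("gpt-4o-mini", f"Review this code and point out anything you dislike:\n\n{code}")
print(review)
```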
Beatrice Erkers: You have a few new friends with different styles and vibes.
Nicklas Berild Lundblad: Yeah, there's a whole story there actually about the fact that they do have styles. So one of the experiments that I've been working on, one of the things I've been doing – I do research at the Technical University of Munich as a senior fellow of practice, and I'm also connected to a couple of other different institutions – and one of the things I'm interested in is colors, colors and large language models. If you ask a large language model what it actually associates with different colors, and you ask it a thousand times and you do an evaluation that essentially looks at the colors and the different kinds of associations they have – and you have to design the prompts in different ways – but if you probe it on its association with the color red, you get very different association clusters from different models. And you also get very different association stability across different models. So some of them, if you ask them a hundred times for 10 words they associate with black, will give the same words all hundred times. Others will be a little bit all over the place. Some words only appear in some models. So it's kind of interesting actually, because, as you said, it has its own tastes and likes. That's really correct. I think there are things built into these systems that make them idiosyncratic.
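Here is a hedged sketch of this kind of probe, not the actual research setup: ask a model repeatedly for words it associates with a color and measure how stable the answers are. The client, model name, prompt wording, and the crude word parsing are all illustrative assumptions.

```python
from collections import Counter
from openai import OpenAI  # placeholder client; the real experiment could use any model API

client = OpenAI()

def associations(color: str, trials: int = 100) -> Counter:
    """Ask the model many times for words it associates with a color and count the answers."""
    counts: Counter = Counter()
    for _ in range(trials):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": f"List exactly 10 single words you associate with the color {color}.",
            }],
        )
        words = reply.choices[0].message.content.lower().split()
        counts.update(w.strip(".,1234567890-") for w in words if w.strip(".,1234567890-"))
    return counts

black = associations("black", trials=100)
# Stability proxy: words that show up in (almost) every one of the 100 answers.
print([w for w, n in black.most_common(20) if n >= 95])
```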
Beatrice Erkers: Which probably points to something more general... I was talking to someone who was building LLMs in African languages, for example. So it's probably also interesting in the sense that there should probably be more cultural diversity in the models.
Nicklas Berild Lundblad: Oh, and you just gave me a really interesting idea. Yes, that's really smart. I think actually it would be really interesting to see how it associates different colors with different words in different languages. If the association is more narrow and stable in some languages than others. That would also be really cool. Okay, I now know what I'm going to spend my afternoon doing. This is... yes, I am a philosopher at heart, so I do weird things.
Beatrice Erkers: Perfect. I want to ask you to have a little school with us also because now, as I mentioned, I've been listening to a lot of podcasts with you and I picked up some good technology philosophy concepts from you that you've referenced, and I thought they were quite useful. So I thought you could explain them to us. The first one is the Gartner hype cycle. Do you want to just share what that is and when it appears?
Nicklas Berild Lundblad: Of course. The Gartner hype cycle is essentially a description of how technology enters the public mind and the markets. And it starts with an early sort of hype when everyone thinks this is fantastic and early adopters come in and they say, "This is the best technology ever." You see a lot of media mentions, et cetera, et cetera. And then after a while, the hype closes down and you end up in the "trough of disillusionment," I think is a lovely name for it. It's where people go like, "This technology wasn't that great anyway, and didn't do what we thought it would do. And we thought it would do all these things and it didn't turn out to be that useful." And then after a while, you realize that, "Well, it's actually kind of useful anyway." And you come up to this "plateau of productivity." So you have the hype, you have the trough of disillusionment, and the plateau of productivity. And I think it's fair to say that many technologies go through this. You could see it with the internet, you could see it with other places. And some technologies never come out of the trough of disillusionment. They just turned out to be pure hype. And some of them do. So if you go back to circa 1999, when we talked a lot about the Gartner hype cycle, you had this enormous hype around push technology, which was the idea that instead of just browsing and pulling, you would have a little device on your desktop and things would be pushed to it automatically. And that was so fantastic in different ways. And push technology was like a million... trillion dollar business, all these different analysts thought that push... and now we don't know what it is even, we don't know what it means to talk about push technology. On the other hand, the internet itself has turned out to be pretty productive and there were hype cycles also back in 1999 around things like XML that also turned out to be quite productive. So that's sort of roughly the Gartner hype cycle.
Beatrice Erkers: That's great. I'm going to try to keep it short, so I'll push forward. But just if you had to answer quickly, where, if anywhere on the Gartner hype cycle do you think we are now in relation to AI?
Nicklas Berild Lundblad: I think we're in the trough of disillusionment heading towards a plateau of productivity. But it's very different for different people. People are in different places on the hype cycle.
Beatrice Erkers: That's true, that's true. Another one is the Solow paradox. Do you want to explain?
Nicklas Berild Lundblad: Yes. So the Solow paradox is from Robert Solow, Nobel Prize winner in economics. And what he noted, I think this was back in the 80s, was that you can see computers everywhere except in the productivity statistics. So the Solow paradox is: we invest all this money in computers, yet we see no productivity growth. And that has been a huge problem in economics, because why don't we see productivity growth from computers? And there are a couple of different possible explanations for that, including that we're measuring the wrong kinds of things. When we measure GDP, which is how we think about productivity growth, we miss things like Wikipedia, because it's worth zero in GDP terms – it doesn't have a market price. And so the question is, how do we value that? How do we value the fact that we now have access to a trillion trillion photos? Those kinds of things don't show up in productivity statistics as they were traditionally conceived. And then there's another one, which is, well, it takes time for technology to push through. And you can see a little bit of productivity growth from computers, but 20 years later, that kind of thing. But the Solow paradox is essentially about: if technology is so great, why aren't we richer?
Beatrice Erkers: Perfect. And the last one in this little school is the Red Queen effect, which I also think is very interesting.
Nicklas Berild Lundblad: Yes, the Red Queen effect is actually from biology originally. But it's a really interesting phenomenon in biology where if you think about evolution and adaptation and you say that evolution essentially means that organisms adapt better and better to their environment, then as evolution runs for a long, long time, you should expect to see fewer and fewer species go extinct, because everyone is getting better and better adapted to the environment. So species should stick around. The extinction patterns should balance out and extinction rates should go down. Looking at the fossil record, however, we see that's absolutely not the case. Extinction rates are constant. That's really, really weird. So why is that? The answer turns out to be that you're not just adapting to the environment, you're adapting to all of those other bastards that are adapting to the environment as well. So you have to constantly compete with others in order to adapt, which means that it takes, as the Red Queen says, all of the running you can do to remain in the very same place. You're always going to compete on two axes: the others and the environment. So it takes the same effort over time just to stay competitive.
Now, the reason we use it in technology is that it's connected to what we started talking about, complexity, where we have this situation that we've chosen to place ourselves in, where we produce welfare and progress with the help of technology and innovation. But as we do so, we also produce more complexity, and that complexity we have to deal with, which means we need more technological innovation in order to continue to produce more welfare and progress, because otherwise the complexity will bog us down. When we produce more complexity than we can mitigate, we end up in a situation that Joseph Tainter, who is a philosopher, anthropologist, and historian, has described as "rapid simplification." Rapid simplification is essentially just collapse. He's written a book called The Collapse of Complex Societies, where he argues that societies that can't mitigate the complexity they produce collapse. So the Red Queen phenomenon, when we think about it in AI or in technology generally, is the fact that even if we're really proud of the innovation we've had historically, we need at least as much innovation going forward to deal with the complexity that we bought when we bought progress and welfare. So it takes all of the running we can do to remain in the very same place in terms of keeping up our rate of improvement.
Beatrice Erkers: Great. Thank you for doing that. Before we finish, I'd like to go back a little bit to you, because we touched a bit on this tension between the benefits and risks of technology and AI. And I know that's also something that you've talked a little bit about – that we need to be curious regulators. And I'd be curious to get your take on how to navigate the landscape right now in terms of AI governance, or AI policy. Because it feels to me like if you go to the US, there's oftentimes this sense of hostility towards almost any type of regulation, in my experience – or it's definitely more common there, not saying everyone is like that. And in Europe, maybe there's a tendency to overregulate or to be a bit too early on the ball sometimes when it comes to technological regulation. So when regulation or policy is at its best, can it be a space of innovation and not just risk management? What does it look like at its best?
Nicklas Berild Lundblad: I think the description you give is interesting. Our standard model of regulation today is that the US is, you know, "Go ahead, do what you want, laissez-faire." Europe is "overregulation and turning Europe into a regulatory museum essentially." And China is, "This is a military-civil project where we're going to do what is best for the party." That's sort of the standard model that people have. I think it's more complex than that. I would actually offer that the US now has, I think, more than a thousand different state-level regulations. So you see an enormous amount of regulation happening on the state level in the US, which means that if you just count the number of regulations, or if you just count the number of lines of regulation, you would find more in the US than you find in Europe, which is kind of counterintuitive. Most people don't think that's true.
And in Europe, I think one of the things that European regulation has always been dependent on is implementation. So how is this implemented through the member states? So what do the member states actually come up with? And what is the way in which the interpretation of the regulation is put into the market? And that's where we are right now with the negotiation of the code of conduct and other things with the AI office in the European Union. And they actually have a chance or a shot at making this better in the implementation phase. So maybe they don't have to regulate themselves into a large open-air museum.
And if you look at the Chinese, the level of control that the Chinese party has over the local Chinese companies is probably a little bit less than is usually assumed. And I think that there's a ton of entrepreneurship being poured into AI at this point. And the party is stepping back because it realized that the earlier clampdown on the technology sector was not entirely successful from a political perspective. So I think you want to move at least one step away from the standard model – that's a more useful model to have in your head.
And when can regulation be helpful, to get to your question? I think regulation is helpful when it creates clarity and predictability for people who work with things that are inherently uncertain. So what I want regulation to do is to tell me, "I can do this, I cannot do that." And if that is the case, and I know that's stable for a long time, and the balance is not unreasonable, then I am good. Then I can work within those constraints. I also want regulation to be symmetric and not asymmetric. If you regulate an American company, you should also regulate the European company in the same way. I think that regulation that treats smaller companies, larger companies, American companies, and European companies differently creates weird effects that are not necessarily that helpful. And then the third thing is I think regulation should aim at being parsimonious. So figure out what the key problems are and then try to solve those key problems, and don't go for every eventuality.
I kind of like regulation for new technologies to also leave some things to implementation, because I think the more you can leave to implementation and guidelines in the case of emerging technologies, the better it is, because they are emerging. If you think, for example, about the problem with trying to regulate technology in too much detail, you have an interesting case study in the European AI Act, where a lot of focus was on general-purpose AI models. And what are they, how should we think about them, what should we do, all those different things. And part of the reason was that the treatment of the act in the parliament came just as ChatGPT was launched. The legislation had the misfortune of being highly public, in the sense that everyone was paying attention to exactly the kinds of questions that were in the AI Act. That now almost seems quaint as we look at the code of conduct, as we look at the AI Act, because what's really happening now, and it's much more interesting, is agentic AI. So how do you think about agents, the delegation of capabilities to agents, the orchestration of agents, that kind of thing? And maybe if you had had more general legislation focused on risk, which is what the Commission originally tried to do in its draft of the AI Act, that would have been better tailored to deal with shifting paradigms in technology than the current regulations.
Beatrice Erkers: That's actually a really good point that you're bringing up. I've heard this year be referred to as "the year of agents" a few times. What's your take on what procedures we should have in place or think about now for that?
Nicklas Berild Lundblad: I think we have in legislation already thought about the agent problem in a lot of different places. So for example, if you're in a company, you can be a representative of that company with certain powers, and those powers are the only ones with which you can bind the company. So the idea of a role description that has a legal mandate is something that is already in legislation. Same thing if you're a parent and you have a child: you have a responsibility for what the child does. The child can't enter into contracts on its own. People have to ask you for it. If you are a pet owner, you have absolute responsibility if your dog bites someone. Those kinds of principal-agent relationships run throughout legislation in different ways. I think thinking through how we want these to work now is really interesting. And you may not even need legislation. Maybe you need standard template contracting. So imagine an institution says, "Here are the five different kinds of agent contracts." And an agent has to declare at the outset, when it meets you, what kind of contract it's acting under. And you can even imagine agents modifying these contracts themselves. I wrote a post on "Unpredictable Patterns," my Substack, recently about the notion of Alexa Gentia, where agents can work out this kind of legal structure themselves. And it may not even be human-readable. It can be something entirely new, because we always think that our way of contracting and our regulation is a peak in the fitness landscape, but regulation is just like chess. Most of the contracts and most of the legal rules we have are probably local peaks. So you can think about different ways of implementing this.
So my sense is that maybe that's what we should try first, but be really clear about it. If you want to require anything, require that an agent states upfront the contractual framework within which it acts, for example. I think that would be really interesting. Then I think we should also be mindful of how we think about agents. There are all kinds of really interesting conceptual and philosophical traps here. One of them is that the simplest description of an agent is that it's a loop: something that loops all the time. Then who actually gets to set the condition for when the loop quits, or how long it runs, becomes the regulatory problem. Some people talk about autonomous agents, which I think is a misnomer, because if you're autonomous, you decide what to do. Autonomy is something that human beings have, but computers do not have yet. It's like when people talk about autonomous driving: if you get into a truly autonomous car, it will decide where to go, not you. The market for those cars is understandably quite small. So you want to be really careful with the notion of autonomy and agents.
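A minimal sketch, purely my own illustration of the two ideas above (a mandatory upfront declaration of a standard contract template, and a loop whose stop condition is set by someone other than the agent itself), might look like this in Python. Every name here, such as ContractTemplate and READ_ONLY, is a hypothetical placeholder rather than any real framework:

```python
# Purely illustrative sketch: an agent that declares a standard contract
# template up front, and whose stop condition is set by its principal rather
# than by the agent itself. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class ContractTemplate:
    """One of a small set of standard agent contracts an institution might publish."""
    name: str
    may_spend_money: bool
    may_enter_contracts: bool


# Imagine an institution publishing a handful of standard agent contracts.
READ_ONLY = ContractTemplate("read-only", may_spend_money=False, may_enter_contracts=False)
PURCHASING = ContractTemplate("purchasing", may_spend_money=True, may_enter_contracts=True)


class Agent:
    def __init__(self, contract: ContractTemplate, should_stop: Callable[[int], bool]):
        self.contract = contract          # declared at the outset, before acting
        self.should_stop = should_stop    # set by the principal, not by the agent

    def declare(self) -> str:
        return f"I act under the '{self.contract.name}' contract."

    def run(self) -> None:
        step = 0
        while not self.should_stop(step):  # the agent is, at bottom, a loop
            self.act(step)
            step += 1

    def act(self, step: int) -> None:
        print(f"step {step}: acting within '{self.contract.name}' limits")


if __name__ == "__main__":
    agent = Agent(READ_ONLY, should_stop=lambda step: step >= 3)
    print(agent.declare())
    agent.run()
```

The point of the sketch is only that "who publishes the templates" and "who writes should_stop" are exactly the governance questions raised above.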
And that goes back to something that's really interesting, and actually a scientific or research problem that is understudied as well: we don't really know what agency is or how to build it. With intelligence, we started with artificial intelligence and are now working our way back to artificial agency. Evolution did it exactly the other way around: it started with agency. And the reason agency appears in evolution, and I'm personifying evolution now, and all biologists will hate me for it, but I'm doing it for explanatory purposes, so there, I've said it, is that evolution needs a way to solve sub-generational selection problems. Evolution can solve things by allowing the fit organisms to survive and procreate and the not-so-fit organisms not to, but it needs a way to go below that time scale and also give individuals the ability to make decisions. If you're evolution and you design a bug with beautiful camouflage, but it never moves from light surfaces to the dark surfaces where the camouflage works, then the camouflage has absolutely zero selection value. If you also give agency to this bug, so it can move to the darker surfaces where the camouflage works, the camouflage starts to have selection value, and you've massively expanded the number of things that can have selection value.
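As a side illustration of that selection-value argument (my own toy example, not something from the conversation), the following simulation compares a bug that cannot choose where to sit with one that can move to the background its camouflage matches; the survival probabilities are made-up assumptions:

```python
# Toy simulation of the selection-value argument, with made-up probabilities.
import random


def survival(camouflaged: bool, has_agency: bool, trials: int = 100_000) -> float:
    """Fraction of bugs surviving one round of predation."""
    survived = 0
    for _ in range(trials):
        # Without agency the bug stays where it happens to be (a light surface);
        # with agency a camouflaged bug moves to the dark surface it matches.
        surface = "dark" if (has_agency and camouflaged) else "light"
        hidden = camouflaged and surface == "dark"
        eaten_prob = 0.05 if hidden else 0.5
        if random.random() > eaten_prob:
            survived += 1
    return survived / trials


for agency in (False, True):
    plain = survival(camouflaged=False, has_agency=agency)
    camo = survival(camouflaged=True, has_agency=agency)
    print(f"agency={agency}: selection value of camouflage ~ {camo - plain:+.2f}")
```

Without agency the camouflaged and plain bugs survive at roughly the same rate, so the camouflage has roughly zero selection value; with agency the gap becomes large.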
So agency then turns on agency: if you have the ability to want things, you can then want what you want. And when that loop continues, you end up in Douglas Hofstadter's strange loop, which is consciousness and intelligence. Agency directed towards agency, in loops upon loops, turns into intelligence and consciousness. That's one explanatory model of how consciousness comes about; there are plenty of others, and I have a weak preference for this one because it connects agency and intelligence in a really good way. Now, the reality is that we have started with intelligence, trying to build that, and are moving back towards agency. But we haven't really built artificial agency, because if we did, we would be building stuff that wants stuff, not stuff that wants what we want, which is what an agent is currently understood to be. If you build a true agent with real agency, it's going to want stuff on its own. It's going to be what David Krakauer, the director of the Santa Fe Institute, refers to as teleonomic matter: matter that wants stuff. We are that, we're stuff that wants stuff. Do we want to introduce into the world other stuff that wants stuff, stuff that we have built? That's one of the cool questions we have to answer in the coming 10 years.
Beatrice Erkers: Yeah, I guess there's also the question of whether that's emergent or not; maybe we create it unintentionally. I'd also be curious to hear your thoughts on how our cultural biases affect how we approach these governance and regulation questions. Just a personal reflection as a fellow Swede: it never occurred to me to distrust the regulations our government makes. Then, when I spent more time in the US, and especially in the Bay Area, I saw that there's obviously a more complicated relationship there. Do you think we have this cultural bias of trusting authorities and institutions more, and that it's useful in some cases and not so useful in others? And how do you think this could affect the outcomes, in relation to how we govern AI overall or the paths we may choose here?
Nicklas Berild Lundblad: I had not thought that through, but it's an interesting question to explore. One of the questions we usually think about is what the social capital in a market means for the kinds of things you can regulate and how you regulate them. Social capital is a way of describing how much trust there is in a market. For example, in Sweden there's a ton of trust, which means a company like Klarna can emerge there, because the only thing you have to do is give them your personal identification number, and then you're on the hook for payments, you can immediately use buy-now-pay-later services, and nobody will contest that. Very few people are going to say, "That wasn't me." In lower-trust markets, say one of the lower-trust South American countries, that kind of business model wouldn't work, because the regulatory framework is not there, the trust is not there, and the social capital is not there. It's a low social capital market.
And so I do think there's a difference between markets where there is high trust in government and markets where there isn't. I guess one way it could manifest is that where there's high trust in government, we will accept AI in government much faster. For example, Swedes currently do their taxes by looking at whatever the state sends them and then replying by SMS, "Yeah, looks right," which is high trust. So if they were contacted by an AI agent saying, "Hey, this is your tax, are you okay with it?" they would probably mostly go, "Okay, that sounds good." Those kinds of government applications might come earlier in high-trust societies than in low-trust societies.
Generally, when it comes to high-trust societies, I think there's a challenge that when the trust breaks, it's really hard to rebuild. Whereas in a low-trust society, you really have to build the trust over time, and if you build it, then you can have something that's kind of robust, I suppose. But I had not thought of it before, so I think it's an interesting question. It's worth thinking through whether high-trust societies will see different kinds of regulation when it comes to AI, maybe faster diffusion in healthcare, for example, or in more personalized settings like... Well, there's also an interesting counterpoint, because you could say that in low-trust societies you would get faster adoption of things like therapy, because you don't want to tell anything to a human therapist who could pass it on to the police or to somebody else. So AI therapists may actually become more popular in low-trust societies. That's a weak hypothesis, but it's interesting to think through how that would play out.
Beatrice Erkers: Yeah, yeah, I think some use cases are interesting to think about in that way. So my last question will just be, you know, this is the Existential Hope podcast. If we manage to do things right and everything goes well, do you have maybe a favorite vision of the future or something that gives you hope for the future of humanity?
Nicklas Berild Lundblad: I think curiosity organized in a way that allows us all to partake in curiosity and explore the limits of our knowledge and mankind's knowledge is all I hope for. If we can get that and we can continue to be curious at ever more powerful levels in a sense, then I think that will be really, really interesting. And that's my favorite version of the future. In that particular respect, I guess I'm very much a Star Trek person... I'd like to go to different places and explore the next frontiers. It would be nice if we weren't killing each other and there weren't wars and all those things. But I think curiosity is good to focus on, because curiosity over time leads to improvements in the human condition.
Beatrice Erkers: Yeah, I think Star Trek definitely still holds up, and I hope it's true that curiosity leads to those improvements. Yeah, that's it. Thank you so much, Nicklas. It was really nice.
Nicklas Berild Lundblad: Well, thanks for having me. I really appreciate it; the conversation was great.
Beatrice Erkers: Thank you.