Most AI futures give us two options: mass unemployment, or a government handout to soften the blow. But what if there's a third option, one centered on completely new categories of creative work that don't yet exist, where people get paid for contributing to AI rather than replaced by it?
In this episode, we talk with Jaron Lanier, pioneer of virtual reality and scientist at Microsoft Research. He proposes a radically different way of thinking about AI, and unpacks its consequences from AI safety to the future of the economy.
[00:00:00] Jaron: With a lot of these AI futures, they're all or nothing. Either the AI will kill us all or keep us as pets. There's not a lot of in between there. Right now the idea is that there'll be more and more classes of people who will be obsolete, and there'll be only a small or perhaps zero percentage of people who are actually needed for anything.
Let me present an alternate future. What if in the future there can be new classes of creative people instead of new classes of obsolete people? What if there's an exponentially expanding sphere of creativity in the future that we can't foresee, that people are proud of, that people are motivated to participate in?
What about that future? That's also a future available to us.
[00:00:51] Beatrice: You've been in the tech world for... all your life, over 40 years, I think. What has changed about this world that you've existed in?
[00:01:03] Jaron: A lot has changed, though not in unpredicted ways. In a way, this is what many people expected; some hoped for it and some were concerned about it.
But one change, of course, is that we've taken over — that what we call big tech companies have become, in a way, pseudo world governments of enormous power, and by far the greatest sources of wealth and influence in many domains. There's still some importance to the energy sector and all of that, but in a sense the pattern of the world is created by tech, even when it's not directly the tech economy.
And I think tech has taken over culture and psychology and society and politics, especially for younger people. Another thing is that tech has its own ideology that's almost like a religion. Now, originally, when I was quite young, there was already the prototype of this ideology. My main mentor when I was young was probably Marvin Minsky, an AI scientist at MIT, who did more to author the current ideas about AI and the approach to technology in the future than anyone else, really.
But at that time, it was from the position of being a rebellious radical thinker. So the idea was that these were incredibly shocking ideas, and talking about them had the sensibility of being an avant-garde artist or something. Now, at the time I disagreed with him and I used to argue with him a lot, so I've been arguing against what's now normal for a very long time.
But at any rate, there's a mainstream sensibility now that people will soon be obsolete, that AI is a thing separate from people, not made of people, that it's a new alien intelligence, and so on. There's also another idea, which is we've known for a very long time that information systems could control human society.
For instance, in 1950, one of the first computer scientists, Norbert Wiener, wrote a book called The Human Use of Human Beings that was about that — about how you could use computer systems to influence people and society and essentially control them. So this has been known for a long time, but it's actually happening, and those who benefit by being at the center of it cannot see anything wrong with it in general.
There's a kind of blindness. But I don't think it's all bad. I don't know how to put this: becoming exclusively negative is a form of self-blindness just as much as becoming exclusively positive. But it's a struggle now, because one of the issues with digital systems becoming so prominent is that they make it more difficult to perceive reality.
At every turn we're finding it harder to know what's true and real, and that's very deeply concerning. So that's a summary. Yeah.
[00:04:20] Beatrice: When you say that not everything has been bad, do you have some examples of things that you have been excited to see, or are excited to see?
[00:04:29] Jaron: Oh, yeah. Sure. No, I can list plenty of things that I view as positive. So when I was a boy, my mom, an Austrian who had survived the Holocaust, died in a car accident. And so when I was starting in tech, one of my fondest hopes was that cars would drive themselves, because people are just not good at it.
We have something like a million deaths a year worldwide from car accidents. So in a sense, if you wanna save lives, you can save more lives by improving transportation safety than by ending wars. It's absolutely extraordinary how bad it is. But here we have this mix of good and bad. So right now I have a car that's driving itself that I think is safer than I am, and I'm very happy about that.
But that car comes from Tesla, and so whenever I put money into it, I'm supporting essentially a single-person company that's completely destroyed all the traditional checks and balances and participation of a corporation, where the board and the shareholders no longer have power, and that person is making himself insane and, so far as we can tell, attempting to destroy humanity and the Earth.
So I feel driving the Tesla improves my safety while doing harm to the planet's safety as a whole. And that's very much representative of what tech is like these days, where there's a lot of really good examples of benefits in the small picture, and yet probably in the bigger picture, humanity would be better off if all computers evaporated immediately.
And that's a horrible thing for me to say, 'cause I'm a computer scientist. But for the moment, it would appear to be true. I can give many other examples of this. There are a lot of things. And I'm a beneficiary. Let's be clear. I'm a Silicon Valley person. I'm a top scientist at one of the tech titans, although I have a special right to criticize it, which is unusual in my industry.
But I benefit. I'm on the good side of this transition, and yet I think like everyone else, I feel uneasy. I really worry about what legacy we're leaving. I worry about whether we're destroying our own sanity, and when the most influential and powerful people are doing harm to their own sanity, of course you have a problem.
And I'm very concerned that's the pattern we're in. But there's no question that the technology can deliver on many promises. The real question is, at the biggest level, are we using the ability of the technology to change the behavior and thinking of people in a way that makes us insane so that the whole project becomes a giant species suicide operation? That is the important question.
[00:07:14] Beatrice: Yeah. I agree, and I feel like with this podcast, a lot of what we're trying to do is to map out the ways in which technology can have a positive impact on the world, 'cause there is so much potential, but honestly it's rarely being realized as much as it could be.
And obviously we have this upcoming revolution, as people talk about it, in terms of artificial intelligence, and I know that this is something that you've spoken a bit about. And as I understand it, you don't really like the term to begin with — artificial intelligence. Am I correct in understanding that?
[00:07:51] Jaron: Yeah. I mentioned Norbert Wiener before, who used the term cybernetics, and he was the person who put forward the most robust early warning about how computers could make human civilization insane to the point of absolute destruction, and that was this book, The Human Use of Human Beings, and other works of his.
And the idea of cybernetics is that you have to look at the whole system of the computer and the world, and how the computer can steer the world, which then steers the computer, and how that whole cycle can go out of control or be beneficial. But he felt that if you just looked at the computer as a thing in itself, you missed the picture of what it actually was as a machine.
And remember, earlier, the Turing–von Neumann concept of the computer was this abstract box by itself that had a distinct input, then runtime, and an output, which is a perfectly fine formalism for doing mathematics. But in fact, computers stay connected to the world; there's never a time when they're off. The computer is actually connected to the world and changes the world. And in a sense, Norbert Wiener had a more relativistic or measurement-theory-based approach to computing, which I think is the correct one.
At any rate, the term artificial intelligence was actually invented in the mid-1950s by a group of people who were concerned about how much prestige and influence Wiener was getting, and wanted to come up with a marketing front, and they included my dear friend and mentor, Marvin.
And artificial intelligence was actually a coinage to unify this group that was seeking funding and prestige in rebellion against those people. And in this concept of artificial intelligence, you declare the computer a box in its own right. You give it its own reality 'cause it's intelligent, rather than just being part of this interactive flow.
So in a way, it's a step backward. It's a way of losing the ability to understand the effect of the computer. A great example of this is we'll say, "Oh, we'll have the AI generate all this slop," but then, "Oh, wait a second. Maybe the AI will train on the slop, and everything will get weirder and weirder. Now we have to correct for that." And we end up tying ourselves in knots. That's just one small example of the constant problem of thinking this way — of elevating the computer to a thing in itself instead of a thing that people are doing.
And then in terms of human psychology, society, economics, thinking in the AI way is both false and terrible. Sort of evil, I'd say. You can totally think of these programs — a large language model — in a completely different way. You can think of them as being like a giant Wikipedia with everybody's data put together in the way that Wikipedia does it. You can think of it as a giant collaboration, 'cause ultimately that's what it is.
Ultimately — and of course you can think about anything in a lot of ways — you absolutely have the option to think of AI not as a thing in itself, but rather as a collaboration of the people whose actions and data are combined to create it. And if you think of it that way, all of a sudden this idea of human worth in the future, human value, human roles, it opens up, because you see that the AI is just a box of people.
You can totally think that way, and all of a sudden, instead of this incredible gloom that we're projecting, especially on young people — "You're gonna be obsolete. You can be kept as a pet, but you have no value" — you get rid of that messaging, because in this other conception, which is equally valid, you are the only value.
You lose nothing in terms of technical framing if you think that way, and you gain everything in society, economics, spirituality, everything. It's so much better, but we have this ideology where we wanna think we're making God. It's all about ego and vanity, but it makes us stupid. It makes us insane.
Look at the leaders of AI. Just look at their characters. And I knew these people from before they went crazy. I'm sorry, I'm ranting. I'm not letting you talk, but let me just say one other thing. Back in the era of Prohibition, when everybody was making illegal alcohol in the US, there was this saying, "Don't drink your own whiskey," because the whiskey was poisonous. You only sell it. You don't take it. And during the crack epidemic, there was a famous rap song about a crack dealer and the ten commandments for dealing crack, and one of them was, "Don't get high on your own supply."
Like, we're selling this insanity-making combination of ideology and technology, and then we're applying it to ourselves. And if you show me a major leader in AI whose personality has not degraded in recent years, you can't. You can't. Yeah. And we have a problem.
[00:12:56] Beatrice: 'Cause I guess the normal critique that you would hear from some people would be: what if the AI becomes recursively self-improving or something like that? Is the case that you're making that we're not gonna get there, basically, so that's not—
[00:13:11] Jaron: You're falling in the trap.
Oh.
That's totally the wrong way to think about what AI is, or what it can do. The moment you think that way, you're lost. You've already given in, and your mind is owned by the idiots. Stop. Back up. Don't ask that question. Let me ask a different question.
Can improved human collaboration feed back on itself and become better and better? Does the fact that we learned to talk hundreds of thousands of years ago, or whenever it was, did that make us not just have one-step improvement but improvement after improvement? That we learned to write, did it fold back on itself and lead to us being able to do more and more? That we learned to print, that we learned to broadcast, that we learned to digitally edit — did any of these things make only one-step improvement, or did they help humanity collaborate in a way that compounded and improved?
If you think of AI as a form of collaboration, it's absolutely reasonable to think that, yes, we ought to be able to use this not just for a single step change, but rather for an ever-improving cycle, and that's great. But it's our collaboration. It's us. AI does nothing. AI is a pile of shit without us. It is nothing. There's only people.
I can't prove it. That's not a scientific claim, because either interpretation is equally valid from a technical point of view. However, if you think the way I'm proposing, everything starts to make sense, and AI founders don't have to turn into insane maniacs. The benefits of thinking this way are so much better.
So don't ask what AI can do, because at that moment you're already lost in a fallacy, and there's no option for thinking well as soon as you've asked that question, and you don't need to ask that question because it's a stupid question.
[00:15:01] Beatrice: So is a better question to ask what can we do with AI? Is that a better question?
[00:15:07] Jaron: No, because AI is not a noun. It's not a thing. The only question is what can we do together with better tools for collaboration? That's a question.
[00:15:17] Beatrice: And if you think about that, and you think about the things that you get excited about, what are some of those things that we can do with better collaboration?
[00:15:27] Jaron: Oh, I can go on and on about it. So for instance, I already mentioned self-driving cars. Where do they come from? They come from all the drivers who don't get in accidents contributing data into a model. Self-driving cars are made of people. They are not made of some alien intelligence.
In science, there's a lot of talk about, oh, can the AI write papers or something? But actually, I don't see it that way. I work in obscure areas of math, and what happens in that area is that every time somebody discovers something, they have to make up a name for it to talk about it, but because everybody else did that, it's very hard to figure out what other people did.
But the very nature of the large language model is that it does multiple hops, and it can uncover all of those things, and all of a sudden you have the existing scientific literature revealed and illuminated in a way it wasn't before. That's huge. Now, somebody else will say, "Oh, the AI model is doing math." What's the difference between that claim and what I just said? It's not actually a functional difference. It's not actually a technical difference. It's a difference in framing and interpretation.
I think increasing the abilities of humans to collaborate is more glorious than pretending you have a new alien intelligence, even if technically the content of each interpretation is identical and neither has any formal or absolute advantage over the other.
But in terms of society and ability of people to function and not go crazy, the one I'm proposing is just so much better. I don't think it's an alien intelligence. And I... This whole thing is, in a way, it's very hard to communicate because it's so obvious. Sometimes the thing that's just right in front of your nose is the very hardest thing to talk about because people just get used to what's in front of their nose, so it becomes invisible.
So the obvious thing is that AI doesn't exist. Only people exist, and people's collaboration is the thing we call AI, and AI is just a way of creating a fantasy god that sends so much power to a very small number of people that they become crazy and unhappy. And there is no AI. But the benefits that people might claim for AI — if you understand them as human collaborations — all of a sudden those benefits can become real instead of confused and obscure.
[00:17:51] Beatrice: And so the way that I'm thinking about your framing now is that one of the sort of key things — and you can add if there are other things — of just viewing it this way instead of the standard way, is that it empowers us and keeps us sane rather than what's happening now. Is that a fair interpretation?
[00:18:13] Jaron: I would say it does more than that, because what it does is it clarifies what the models can actually do and cannot do. So for instance, if you think that the model is an alien intelligence rather than just a combination of what people have done, then you keep on expecting it to solve its own problems and stop hallucinating and stop having security problems and all these things.
But if you understand it as a compound of what people have done, none of that surprises you, and you can use it more effectively from the beginning rather than pretending that it's on this path to undo that. Because a forced massive collaboration of people is, of course, going to include all kinds of flaws, because it inherits the flaws of the people who are being combined, right?
So here's another example. The brightest spot probably in terms of function of large models lately — and we're discussing this in May of '26 — is in coding, where we've seen performance increases in large models applied to computer code, which is great. I think computer code has always sucked. It's always been like this embarrassment to computer scientists that most people can't do it. It's hard to maintain. It's always flawed. Better tools for it is a priority, and we seem to be finally starting to get there.
But here's the thing about it. I think the ceiling is going up in the sense that we have more and more examples of prompt-based coding being effective. Fantastic. Love it. Once again, based on all the code from people: no GitHub, no coding model. Let's be very clear about that.
But at the same time, we still have crazy failures, and I can assure you I generate them — and the failures tend to happen more with unusual projects where there's less precedent data. That's what you'd expect. It's not an alien angel that can synthesize every possible world. It's a combiner of what's happened, which is great.
A lot of code that's happened shares qualities with other code that's happened. If that wasn't the case, then the model wouldn't work at all, and so it tends to work better... if you ask it to make a simple website, bam, it's gonna do it 'cause that's happened so much and it's online so much. Many other examples like that. I've asked it to do some weird things. It gets harder. Still useful, still useful for those modules you can come up with that are more common.
But the point is, when you have in mind that this is coming from the community, you can focus your prompts and understand what you're doing and get much better results much more quickly. If you have a mental model that reflects what it actually is, all of a sudden you can use it better and you're not shocked when it fails in certain ways, and it just improves everything.
But if you wanna believe the AI way of thinking — that there's this alien angel happening — then you're supposed to not think. You're supposed to just talk to it like you're talking to God, and if it fails, you're supposed to say, "Then next year is when we'll get closer." That's stupid. What kind of moron are you? Why would you do that to yourself? The only benefit you get out of it is the ego of somebody who's making themselves terribly unhappy and insane. There's absolutely no reason to think that way.
[00:21:35] Beatrice: Can I ask one last question on this topic?
[00:21:37] Jaron: Yeah, sure.
[00:21:38] Beatrice: Is this specifically for large language models — just 'cause language is very obviously human in so many ways — or do you think this would go for any type of AI model that we ever try to make?
[00:21:54] Jaron: Okay, this actually brings up another issue with the AI term, which is that the term AI as it's used doesn't refer to a specific thing, but rather an ideology, and so therefore all kinds of very different programs can be lumped together because they can be understood within that ideology, even if they don't have much to do with one another. So it's a sort of anti-scientific term.
In the history of science, all the way from, I don't know, Galileo, Kepler — like that early era — up until Turing, the whole idea of scientific culture was to find a form of truth that wasn't so dependent on human perception, human biases. We'd work with evidence, empirical method, a group process to try to counter our own biases. So that's why we have peer review, and we have all this whole structure to try to lose a little bit of our tendency to see the world the way we want to, and instead to struggle to see the world as it is.
But Turing says, "I really am interested in the computer being more than a machine. I want it to be like a person or something. Since I can't make some kind of personhood meter, and I'm not sure what intelligence is, and nobody knows, the only thing we can do is this: if we successfully fool a person, then we'll treat that as a result."
Now, the Turing test is somewhat derided as antiquated in a lot of circles, but the truth is that the Turing test is still the paradigm for AI products, for AI culture, for everything. Science changed from being about not fooling people to being about fooling people. All of a sudden, science became theater, science became stage magic, science became a fraud, if you like. Everything is about can you fool the person?
And so because the culture of science had been the exemplar of seeking truth, I think science becoming corrupted into a form of persuasion instead of truth-seeking has radiated out and is the ultimate root cause of the truth crisis, where we can't really tell what's true anymore. Are vaccines the cause of autism? Eh. People do their best, but the scientific process has been captured by tech wealth, and now we have vaccine deniers killing science. But that's actually downstream from science shooting itself in the foot by choosing fooling people over the quest for truth.
So you're asking about the different kinds of programs that are all called AI. I think there are some kinds of programs that rely less on people than others. The large language model relies only on data from people. There are some kinds of things that are solvers or optimizers or space searchers of various flavors, often used in things like searching for new materials. And there are a lot of models that are in between.
Some of the models that learn to play games might start with a database of human moves but then also combine optimization strategies. The thing is, if you choose to accept what I propose as the better framework, then even solvers or other types of algorithms that don't use giant data from people as their input, but rather just a representation of the problem space — even those are set up by people and operated by people, and you can still view them in a human-centric way.
[00:25:37] Beatrice: Yeah, that actually leads me to something that I would also like to hear your take on, which is how can we make the sort of trajectory that we're on currently — how can we improve this trajectory? And one of the ideas that you've discussed that I think is really interesting is the idea of data dignity. Could you expand a bit on that and what it would be useful for?
[00:25:57] Jaron: Absolutely. So right now the orthodoxy is that we're creating a new super god thing under our own control to soothe our ever less happy and more insane egos, and the closer we get to this wonderful goal, the better; that's our official path in life in the industry. And so the alternative, thinking of it as a collaboration of people, is technically unexplored by decree.
So for instance, if you wanted to, when you trained a large model, you could leave traces that would help you compute which training sources were the most influential in which output as you were running it. You might say, "Can you really?" Yeah, I would say we've demonstrated that. There's a small community of researchers who try to do it and try to research it, even though it's taboo and anti-doctrinal. But that doesn't mean people don't do it.
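Purely as an illustration of the kind of trace being described, here is a toy Python sketch with invented names and random stand-in data: keep an index of training-example embeddings tagged by contributor, then score which contributors' examples are most similar to a given output. Real influence estimation (for example, gradient-based methods such as TracIn) is far more involved; this only sketches the shape of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for stored training-data embeddings, tagged by contributor.
train_embeddings = rng.normal(size=(1000, 64))   # one row per training example
contributors = rng.integers(0, 50, size=1000)    # which person each example came from

def top_contributors(output_embedding, k=5):
    """Rank contributors by total cosine similarity of their examples to an output."""
    norms = np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(output_embedding)
    sims = train_embeddings @ output_embedding / norms
    scores = np.zeros(int(contributors.max()) + 1)
    np.add.at(scores, contributors, sims)         # aggregate similarity per contributor
    best = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in best]

# A made-up "output" embedding; in practice this would come from the model.
print(top_contributors(rng.normal(size=64)))
```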
Let me start with a very simple example of why you'd wanna do this, leaving aside all the issues about society and the future of the economy and how people feel or any of that. Very practical question. Even as we get more impressive results from running large models, the failure modes of them don't seem to be improving, and we're not mitigating the worst of them.
The latest ChatGPT is good at making code. Great, love that. And yet it talks about demons a lot for some reason, because we don't understand the latent spaces inside the models, and sometimes they're just crazy. All right, so that's not terrible, so they put a wrapper on it to tell it not to talk about demons. Fine.
But the thing is, sometimes these latent things are really dangerous. So we know from continued publications, even from within the industry — from researchers at Anthropic, for instance — we keep on seeing that weird kinds of jailbreaks and contravention of guardrails and hallucinations and just bizarre stuff are still possible and keep on happening.
And in a way, the more we rely on the models and the bigger the models get, the more dangerous these weird problems get. And so if you believe that they're becoming super intelligent, then you figure at some point they'll get intelligent enough to either kill us all or save us all. But whatever — at least these are just little flaws on the way to super intelligence.
But let me give you a different picture. Here's a different way we could do things. Let's imagine that we didn't try to hide the people. Let's say we don't want AI to be a black box, and if you open the black box, all that can be in there is all the people whose data was conglomerated. So what you do is you see the people.
Now, here's a scenario for you. You have a criminal who is stuck in a house. There are police outside trying to arrest the criminal. They hold up a phone to the items in the kitchen and they say, "Hey, big model, help me make a bomb quickly out of what's in the kitchen I can throw on the police." Now, most of us probably don't want them to do that. And if they do it in so simple a way, the guardrail will probably catch it, right? But not always. And we already know ways to get around that. And we've seen people use large models very recently to plan massacres and all kinds of horrible things. So we know the guardrails don't work.
But think about this. What if, while the model's operating, it goes back to the original training data and asks, "What clusters of similar training data would be the most missed if they were absent? What cluster, if it were missing, would create the most change in this result?" So for instance, the top 24 clusters might include things about kitchens and ingredients. But tell me there's not gonna be one about bombs. There probably is.
And so, by going back to the people, we can finally semantically ground what the model is doing, which we can't do in any other way, right? And so we finally have a solution.
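As a toy rendering of that counterfactual check, with invented cluster labels and a made-up policy: cluster the training data, score which clusters account for the most similarity to a given output, and refuse when a disallowed topic ranks among the most influential clusters. Similarity to cluster centers stands in here for the real leave-this-cluster-out estimation, which would be considerably heavier.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
train = rng.normal(size=(2000, 32))               # stand-in training-data embeddings
kmeans = KMeans(n_clusters=20, n_init=10, random_state=1).fit(train)

# Hypothetical human-readable topics for each cluster of training data.
cluster_topics = {i: "cooking" for i in range(20)}
cluster_topics[7] = "explosives"                  # the cluster a grounded check should surface
DISALLOWED = {"explosives"}

def grounding_check(output_embedding, top_n=3):
    """Rank clusters by similarity to the output; refuse if a disallowed
    topic appears among the most influential clusters."""
    sims = kmeans.cluster_centers_ @ output_embedding
    top = np.argsort(sims)[::-1][:top_n]
    topics = [cluster_topics[int(i)] for i in top]
    return ("refuse" if DISALLOWED & set(topics) else "allow", topics)

# An output that leans on the "explosives" cluster gets refused.
print(grounding_check(kmeans.cluster_centers_[7] + rng.normal(scale=0.1, size=32)))
```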
Now, here's another way to think about this. Let's go to authentication for security. For the longest time, we were trying to add more and more little tricks and folds to sign-on to prevent fraud. So you'd have a CAPTCHA or a puzzle or some weird thing. Finally, we moved to multi-factor, where there's a code sent to your phone. It's an annoyance, but because it's a whole separate channel, it's harder to contravene, and it's improved sign-in security a lot.
Can we do that for AI? Yeah, I just told you how to do it, because when you do this separate counterfactual clustering estimation — the thing I just talked about — when you go back to the people for semantic grounding, sure, that method is a separate model essentially running concurrently. It costs something. But the benefit is enormous because what it does is it gives you a second channel. It gives you multi-factor AI.
Now, I wanna note something else. In biology, brains don't have a single lobe. Brains are not this big monolithic, "Oh, it's just all glory to one model." Instead, there are multiple things that balance each other well. This is the same thing. At scale, we believe this actually becomes an efficiency rather than a cost.
But the thing is, AI is made of people, and the only semantic grounding in this case is to go back to the people. So now I think what I've just described is an outline for solving the security and danger problem with AI. And if you really think AI is about to murder all of humanity, here's probably the solution. This is the most likely way to avoid that. But the only barrier to it is your ideology that you're making a god.
And I know that when I talk this way, people don't like it, 'cause nobody wants their god killed, and I don't wanna talk about other people's religion. But in this case, come on — if we can just give up this stupid religion that's making everybody miserable, even the people at the center of it, if we can just give it up, then solutions open up that are otherwise taboo.
All right. So then, this is just about this one security incident with a criminal in the kitchen. But think through the potential future economy and—
[00:32:24] Beatrice: So I guess what I'm interested in is just — I've heard this idea come across from a few different people in different ways, and it basically sounds very attractive to me. I interviewed Glenn Weyl, for example, and he spoke about something similar.
[00:32:38] Jaron: Audrey Tang is another wonderful exponent of this, and there are a lot of really great people working on this. Glenn's great. And we don't all agree 100%, and we shouldn't. I don't believe in 100% consensus. But with Glenn and Audrey, you have two excellent figures looking at this perspective.
[00:32:55] Beatrice: Yeah.
[00:32:56] Jaron: Yeah.
[00:32:56] Beatrice: I guess what I'm interested in is, like, how do we get there? It's like, when we have this idea that seems great, what do you think needs to happen in order for us to actually get there?
[00:33:06] Jaron: Yeah. This is the problem of what do you do when you're combating an entrenched orthodoxy? What do you do when there's a mob that controls ideology and funding and power and the general sense of status and career and everything? What do you do? And I think what you do is what I'm doing now. You just try to articulate an alternative.
I'm continuing to do the technical research on the alternative, and it's hard. It's slow going. If you agree with the orthodoxy, it's easier to get papers through reviewers at conferences. If you agree with the orthodoxy, it's easier to get high-paying jobs, because the people who share the orthodoxy support each other. This is also the explanation of how there are so many AI leaders who actually have a poor technical grasp of the models.
But you just — I just think, even if you're in a minority, if you just keep working and you keep slogging, ultimately there will be enough people of goodwill. And the last thing I would expect or even want is for people to agree with me 100%. But I think there's enough here that enough people should be able to find a new consensus on, that it'll promote itself based on those who still believe in truth over ideology. I really think we'll get there, to mutual benefit.
There's this view of the long-term future, which is very much like science fiction movies, which is that either everybody will be killed by AI very soon or we'll all be kept as pets or some variation of those things. I think Max Tegmark recently came out with seven variations combining the two or something like that.
But there's another one I wanna present to you that is typically not presented. All right, so right now the idea is that there'll be more and more classes of people who will be obsolete, and only a small or perhaps zero percentage of people who are actually needed for anything, except maybe the sacred owners of the computers or the digital hubs or something — but maybe even not them.
The prospectus for OpenAI says that money might be made obsolete, and so the investors shouldn't expect to make any money 'cause it might not even be there, and so on. And various of the tech founders have said everybody will have infinite wealth and possibilities. Yeah, I don't think people will be able to go camp in Elon's living room if they want to. I still think we'll put up walls between each other. But anyway, let me present an alternate future.
What if there's a range of creative activities that are possible that we haven't imagined yet? What if in the future there can be new classes of creative people instead of new classes of obsolete people? What if there are new classes of people who come up with interesting things to do? Their data becomes aggregated in new revisions of AI models, so other people can also benefit from that, but they get paid — so they're creative professionals. They're paid to be creative in new ways.
What if there's an ever-expanding variety of new classes of creative people doing things we cannot imagine or articulate yet, because we believe in the future, and we don't believe we're the smartest, or the only smart people ever for all time? What if there's an exponentially expanding sphere of creativity in the future that we can't foresee, that people are proud of, that people are motivated to participate in, that's fundamentally surprising? What if? What about that future? That's also a future available to us.
And I wanna say something about this ever-expanding creativity future. With a lot of these AI futures, they're all or nothing — either the AI will kill us all or keep us as pets, but it's like there's not a lot of in between there. And any time somebody has scenarios that don't have wiggle room, I really worry, 'cause a really brittle scenario is not the scenario that's gonna happen.
But the great thing about the ever-expanding creativity scenario is if it only happens 10% or 30% or whatever, it's still great. So if we allow for it, even if it only happens partially, it's a wonderful thing. But we're not allowing for it because of the ideology of wanting to believe in AI instead of in ourselves.
There's another issue, which has to do with meaning and reality and continuity. I'm a radicalist. I want the future to be radical. I want my great-grandchildren to be flying faster than light without the need for a spaceship between the stars. I want them to be morphing their bodies into crazy things I can't imagine. I want a weird future. I love that. But I want it to be continuous. I want to have a line of memory all the way back so that things are grounded and meaningful and lessons can be learned.
I don't want singularity. A singularity by definition is dementia. It's forgetting. And so I don't want that. I don't want that information loss. And this idea of an ever-expanding creative front in the future can support this continuity. It can be grounded. You can have radicality and groundedness at the same time. That's the future we should want.
[00:38:42] Beatrice: Yeah. I really like that vision, by the way. I feel like it's some sort of version of Bostrom's concept of Disneyland with no children — if we get there without any memory, we can't enjoy it.
But I think what I wanna make sure we have time to talk about also is that a lot of this connects to the creativity part that you've talked about. I actually read something today that I thought was really interesting. Someone was posing the question: why don't we have more Einsteins now that we have the internet, since there's so much knowledge? And their answer was that Einstein's biggest strength wasn't just that he had a bunch of knowledge; it was that he was able to imagine and be creative.
So I think that fits quite well. But this relates to, I think, a quite common thing: when we try to envision positive futures that don't make people obsolete, they still make people obsolete by using UBI as the way to give people bearable lives. But you, as I understand it, don't think that UBI is a good solution.
[00:39:50] Jaron: No, I'd rather have a future... This is a slightly complicated issue. I think UBI is a mistake. And for those who don't know, we're talking about universal basic income. The reason I think it fails is that it's politically unstable.
So as soon as — no matter how much you try to weave it with blockchain and decentralize this and that — ultimately there'll be some center point that can be seized. And this is an important fallacy in the tech community, that if you decentralize an architecture, there isn't a central hub that benefits. All of the open source and decentralized people who've tried to do that have ended up instead propping up entities like Google, because what ultimately happens is every low friction system will create some super powerful nodes that compound themselves and become ultra-influential and gain control.
It's just not possible to evade that because of network effects. It's math. And it might not be immediately apparent how it'll happen, but even if you have a super decentralized system, people will still need to communicate about it, and at some point there'll be some layer where the centralized node emerges.
Now, as soon as that centralized node emerges, it becomes a target to be seized by terrible people. So the way I sometimes put this is: you might start with Bolsheviks, but you'll end up with Stalinists. This is what's happened with communist experiments, going back many decades now, even centuries. When you create a new center of power that's supposed to offset the evils of the previous center of power, it'll be seized, because centers of power are relatively easy to seize. It might be seized by a theocracy, it might be seized by a techno-autocrat, it might be seized by a criminal organization, whatever. But it'll be seized by whoever gets it.
And so I just think UBI is fundamentally unstable. And a market economy definitely has tendencies to be unfair and definitely has problems, but if it's combined with other layers of society, it is a true force for decentralization. And that's the reason why a lot of societies that are the most pleasant to live in have both non-market and market layers combined, and they do that in different proportions. The Scandinavian model is different from the American one, but all of the ones where you actually want to live have some kind of combination of different layers that check each other — like the different lobes in the brain I was talking about before, right?
Power has to be somewhat confused in order to not be horribly toxic. It just has to be, all right? And so a market economy based on information creativity, where people receive some sort of market-based payment for the information they contribute to the AI, is the better solution. It's the only way to distribute the economic power of the AI. It's the only way to confuse the power of the center that inevitably will emerge in any digital system. And it's doable. I described a little bit about how to do it earlier.
So what we really want is some sort of system where, if somebody makes a contribution (one that's inevitably part of a cluster with other people making related contributions) and it's used a lot, they have some way of making money from it. Now, there are a lot of counterarguments: oh, it would just be pennies, it would be terrible, it wouldn't be very much, whatever. But that's all based on how you set it up. That's based on your cultural attitude.
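As a back-of-the-envelope sketch of that kind of payment, with all figures and names invented: suppose each query yields attribution weights over contributors (for instance, from a check like the one above), and a fixed royalty pool per query is split pro rata.

```python
from collections import defaultdict

ROYALTY_PER_QUERY = 0.002  # dollars set aside per model query (hypothetical figure)

def settle(query_attributions):
    """query_attributions: one dict per query, mapping contributor -> attribution weight."""
    earnings = defaultdict(float)
    for weights in query_attributions:
        total = sum(weights.values())
        for person, w in weights.items():
            earnings[person] += ROYALTY_PER_QUERY * w / total
    return dict(earnings)

# Two example queries crediting three contributors.
print(settle([{"ana": 0.6, "ben": 0.4}, {"ana": 0.2, "ben": 0.3, "chloe": 0.5}]))
```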
People who think that capitalism is based just on some sort of pure market action with no values don't understand capitalism. Capitalism is always a reflection of standards. It's always somewhat distorted in order to work at all, period. Otherwise it can't function. And so can we come up with economic solutions that would support a large part of society on some sort of market basis based on data value? Yeah, absolutely we can.
Can we do it for 100% of society? I don't know. Probably not, but maybe. That's an interesting frontier sort of a question. But should we do it for as much of society as we possibly can? I really believe so.
And this UBI problem has really been an issue lately. For instance, I've been on both sides of a recent prominent lawsuit. I'm on the board of the Authors Guild and a prime scientist at Microsoft. And so when the Authors Guild sued OpenAI and Microsoft over data use from authors, I was actually called in to be deposed by both sides, which is insane. I call it capitalist yoga, where you sue yourself or something.
I went into it very transparently, so everybody knew this was gonna happen. But the problem is I had to disagree with both sides. The Authors Guild was going for a class-action result, which is the available legal remedy in the United States and really in most of the world, where everybody gets a little bit through some giant sort of rough accounting. But that's a step towards UBI, which I really don't support. As difficult as it is to find an alternative, that's not the right one. And meanwhile, I do think people should be paid for their data, but it has to be on some sort of market-demand, individual-use basis, so that it has some reality and it's not administered by some central authority. I hope that's not too much answer, but I hope that gives you a sense of where I'm coming from on it.
[00:45:29] Beatrice: Yeah. No, and I think that's a really interesting point, because I think everyone gets some sort of feeling when they think about UBI that maybe it won't work. But I wanna ask you one last question. We've seen throughout this conversation, and also in the point you just made about being on both sides of this lawsuit, that you try to be a techno-optimist and show all the good that technology can do, while also feeling the need to point to things that are not looking great right now. Do you have any recommendations, or just shouts of support, for people trying to hold that balance? 'Cause you said you had a unique clause in your contract, for example, to be allowed to criticize. Should everyone have this allowance?
[00:46:15] Jaron: I want to point out, so I have an agreement with Microsoft that I speak my mind, including sometimes criticizing Microsoft, but I'm always clear that I'm not speaking for Microsoft, which is certainly reasonable. And somehow it has not resulted in any harm to Microsoft or shareholders or customers. In fact, I would argue that it's been to their benefit. And so my question is, why isn't this more common in the tech world? Why aren't there more people like me in Google or Meta?
Like in Meta, people always have to leave in disgust and become whistleblowers. Why? This should be a good thing. I think this makes everything better, including the companies. Why isn't it done more often? What are we afraid of? I really just call on Meta, Google, the Musk companies, Amazon — all these people: instead of having people become depressed and quit in disgust and become whistleblowers and then testify about embarrassing things in Congress, let them talk. Let them talk. Take them seriously. We're becoming like governments — let's act like governments and have citizens who can speak. It doesn't do any harm. It's good. All right.
So that's thing number one. And not that I get everybody at Microsoft to agree with me, obviously. That's okay. That's fine. But we need to be able to talk. We need to be able to listen to each other. We need to respect each other. It's really important.
So then this other thing about — in a way, optimism versus pessimism has itself become corrupted, because in the AI world, if you're not totally bought in, you're called a pessimist. I have people coming to me all the time like, "Why are you such a pessimist about AI?" And I'm not the pessimist. I'm the person talking about this expansive future. I'm letting my car drive me around 'cause I think it's better than me. I'm the optimist. You're the pessimist 'cause you're telling me that AI might be about to kill everybody.
But to them, optimism means that they're about to make a god, and pessimism means they're not about to make a god. So the whole term has become bizarre. There's some kind of grounded optimism, real optimism, that has become a little elusive, because the dominant storytelling demands that we not even articulate it. It doesn't even wanna criticize it, 'cause it doesn't wanna admit it exists.
And so what I wanna say to people who feel optimistic is there's a lot of reason to look at history. 'Cause right now we're sort of like Pol Pot — we're saying history's gonna be over, we're gonna have year zero, and nothing else matters, only what we're doing. Oh, fuck that. If we look at history, we find grounds for optimism, because we can see how humanity has gone through tight squeezes and made it before. We made it through the fascist empires of mid-century and the communist empires of mid-century, which were horrible. We've made it through plagues, which were horrible. We've made it through all kinds of failures of empire. We've made it through horrific technology failures and societal stresses. All kinds of things have happened, and we've somehow found a way. We're pretty resilient. We should be optimistic about ourselves. We have evidence.
The key to evidence is to not ignore the evidence. And in this case, there's a lot of actually pretty good evidence. My view of history, as horrible as it's been, is I actually kinda think I see a trend line improvement. I see fewer people in misery and fewer people dying as history proceeds, and I like that. So I think optimism, in a grounded sense instead of in a fantastical, anti-human sense — real optimism is well-motivated.
And also, I don't know how to put this, life is great. Just stop for a second and marvel at how lovely it is to be alive and how many... We can play music, we can... I don't know, there are so many wonderful things. I think happiness and optimism are highly rational if we just give ourselves a chance. And I see the tech elite all the time. I'm in their world, and they're becoming more and more stressed and unhappy and crazy all the time, and it's awful. There's no reason for it.
[00:50:43] Beatrice: Yeah. I think that's a great note to end on. We have a lot of things to be hopeful about and a lot of things to enjoy. So yeah, thank you so much for taking the time, Jaron.
[00:50:51] Jaron: Thanks for being interested.
Universal basic income (UBI): A policy proposal in which all citizens receive a regular unconditional income from the government, regardless of employment status. Explainer by the IMF.