In this episode of the Existential Hope Podcast, visionary inventor and physicist Mary Lou Jepsen challenges us to look past the dystopian fears surrounding brain-computer interfaces (BCIs). She argues that the same technology poised to cure our most devastating diseases could also unlock a future of radically enhanced human communication and empathy.
Mary Lou shares her vision for a future where we can communicate with a bandwidth far greater than speech, potentially ending loneliness, fostering deep understanding, and even preventing war. She explains how new, non-invasive approaches to neurotechnology could make this a reality, and critiques the media's narrow focus on single companies, calling for a broader appreciation of the global scientific effort. We also dive into the critical ethical questions, from the creation of "neuro-rights" to the personal choice of whether to share our innermost thoughts, and why she believes we must engage with these possibilities with hope, not fear.
What if the technologies we fear most could help us become more human? In this vision, brain-computer interfaces don’t isolate or surveil us—they dissolve misunderstanding, cure mental illness, and expand our capacity for empathy. By increasing the bandwidth of communication and letting us truly “step into someone else’s mind,” neurotech could help overcome one of humanity’s oldest bottlenecks: ourselves.
Mary Lou Jepsen is a pioneering inventor, entrepreneur, and scientist working at the intersection of imaging, neurotechnology, and healthcare. She is the founder and CEO of Openwater, a company developing cutting-edge technologies to replace bulky hospital machines with wearable, high-resolution imaging tools—potentially transforming how we diagnose and treat disease. Formerly an executive at Facebook, Google[x], and Intel, and co-founder of One Laptop per Child, Mary Lou’s work pushes the boundaries of what’s possible in human-computer interaction, with bold implications for brain-computer interfaces, radical healthcare access, and the future of communication.
Beatrice Erkers:
This is the Existential Hope podcast, so I want to hear what your favorite positive vision of the future is. It can be as wild and as weird as possible—whatever comes to mind. It could even be just a technology.
Mary Lou Jepsen:
You asked this question—I think it was one of the ones you sent. I know you've spoken a bunch about telepathy, for example. That would be fun.
I'd say this technology makes life better. There's a way of looking at all changes in history—James Burke wrote a book about it when I was coming up in the 70s called Connections—a way of looking at history where all changes are from technology. Now, handling technology is important, but killing technology and saying, "We don't want to know about it, we want to bury our heads in the sand" is a life I don't want to lead. So, the vision of the future for me, I suppose, is one where everybody can believe that learning and knowledge are interesting and are the way to make a better life.
In specifics, sure, if we can enable brain-computer interface, is that the end of war because we'd actually understand ourselves? Or is it our nature to want to control others, as many of them as possible? Is it the end of love? Is it the beginning of understanding? Can we do more together in a collaborative way? I mean, all of that will have to be worked through, and many people will try different government systems as that comes up.
But it could eliminate our loneliness and isolation. So there's this question of, how do we want to be? And how do we talk about what we want to be? Certainly, it would be very convenient in terms of communication. I think it'll be tokenized, basically. You'll let certain parts of what you want to share out, and you'll keep what you want as your innermost thoughts to yourself. Or perhaps you'll have relationships where you wish to share your innermost thoughts, where they can be exploited—and you know that even exists now and has for all of time, if you read any great literature.
So, do we still want to understand how the brain works? Do we not want to understand how the brain works when mental disease and neurodegenerative disease take so many lives? By understanding how the brain works, in successive approximation, asymptotically getting to that understanding is interesting. But the brain and body are made of the same essential things, so let's not delete the body from it, because those have diseases and things that we could optimize as well. And they're connected by hormones, which are the only things that act at a distance, where the brain can communicate with the belly or the leg or what have you. And the nerves run down, of course, the long nerves.
There's all this fear. It’s in basically every movie I've ever seen about it. And yet there's so little done on what we could be capable of if we could communicate better. And there's a belief that we should be able to communicate better if we could communicate with higher bandwidth than speech or text. So why not try it, since we have to do it anyway to cure mental disease and neurodegenerative disease?
Although, maybe it's nice to end with the words of the great Paul Allen, co-founder of Microsoft, with whom I talked about this a lot. He founded the Allen Institute for Brain Science. And he would say, "Mary Lou, there's five Nobel prizes just to understand how a neuron works. We're not going to get there." And I was like, "Paul, you go win those five Nobel prizes. I can use a cell phone without understanding how a transistor works or how quantum works." So there are things that you can do with it that could be quite powerful.
We're scared of people knowing what we think. Apparently—I'm not saying anybody's lying—but apparently you hear 25 to 30 lies a day from people around you. They might just say, "That was delicious," about whatever I made, and they're just trying to be nice, but it wasn't delicious to them. It could be just our notion of manners, and those would have to change to be more truthful. Or we put filters on it. Whatever. There are solutions to it.
But I think this great negativity, that we have to approach brain-computer interface from a negative point and not a positive point—except for quadriplegics and stuff like that—is somewhat absurd. And I don't quite understand the negativity. There are some great sci-fi novels that explore it from a positive perspective.
But I'm not actually working on that right now. I decided that it's actually easier to focus on curing all cancer, mental disease, neurodegenerative disease, and cardiovascular disease with a universal device. And that's not controversial. But where that leads is a step in getting to brain-computer interface. And other people are working actively on it.
I keep getting asked by reporters what I think of Neuralink. And what I think is that there are a hundred other companies doing much more interesting work than Neuralink. And they don't publish that because Elon gets clicks. I think one thing that would be helpful is to stop. Elon gets a lot of press. Let's talk about all the other great work that all the other great neuroscientists and neuro-companies are achieving from lots of different points of view. We're working from a certain point of view. We look at the work of some other great companies and researchers and just people that have worked 40 years in this field and have accomplished great things in our understanding of it. I guess it's just the nature of... is anybody bored of Elon yet? We all know about him. He's achieved a lot of things. We know he's got some flaws. He might argue he doesn't, but whatever. But enough. Let's talk about Jack Gallant, or Tom Oxley, or Ed Boyden. There are all these people that are doing great work, and there's a bunch of others that aren't even on that pedestal that are actually doing the work.
Have discussions with them. The NeuroRights Foundation is a pretty interesting organization, speaking of that. The law just changed in Montana: neuro-rights will be private. Your neuro-data in Montana will be private. You can keep it confidential. It would be the extension of the "right to be forgotten." Chile did that a number of years ago; it got revoked, but the state of Montana recently passed it. So if the issue is having all your thoughts out on the internet—no, you can have confidentiality and privacy. The way Chile did it is they said it's part of your medical data. That's the way we're approaching it with ours; it's part of the regulatory approval. So you have the privacy laws, if you wish them, from healthcare. I happen to think HIPAA is too constrained and have released my data—anti-HIPAA, like everybody can have all my data. But it's a choice, and you can make that choice and you can change your decision if you want to keep something private.
We can have all those discussions, but there's this notion that it's all over. The reason Openwater is called Openwater came from an essay by Peter Gabriel, who's an investor and actually put in sweat equity in the starting of Openwater. He wrote an essay about the future of brain-computer interface, basically saying we have to take swimming lessons to learn about our thoughts flowing like water, and that they will flow everywhere to everybody, and how will we learn to deal with that? It's something we need to learn to deal with, but it's inevitable. It enables interspecies communication. It enables us to understand each other and language and culture and so forth, or at least start to get at that. There'll be misunderstandings, there always are.
It's funny. Jason Pontin, when I told him about it, thought it would be the end of love—that you need secrecy to be really in love. Which I thought, really? Do you think so? He was the former editor-in-chief of MIT Technology Review. But I remember doing a podcast with Fox News and the guy, Greg Gutfeld, he said that he thought it would be the end of war, because we'd understand each other. A Fox News guy. Really? I think we'll probably find some way to have war anyway, sadly. But that would be great. That's a nice vision.
Beatrice Erkers:
I think it's a nice vision, and maybe a nice vision to wrap up this interview. Because I think a bottleneck in a lot of the things we've been talking about is human nature itself—the challenges in communication, biases, and so on. There are obviously a lot of challenges to having your thoughts open to everyone. But what I think is really exciting are the applications of neurotech and how they could perhaps expand empathy and things like that. The "end of war" point is a nice one to end on. Fingers crossed.
Mary Lou Jepsen:
Yes, empathy is what it can enable. And also being able to step into somebody's shoes. Can we get into somebody else's brain? That would enable that, too. I did that with hormones after my surgery. I tried the dosage of a guy in his 20s, and my gosh, I couldn't handle it. It was really tough. But I got along with guys a lot better after that. I was angry all the time, I thought about sex constantly, and—as a positive—I was the smartest person in the entire world. Of course, I got the dosage wrong; you guys aren't really like that. But it gave me this understanding of why teenage guys, or at least it seemed to me, behave like that.