“Perhaps the most likely type of existential risks that could constitute a Great Filter are those that arise from technological discovery. It is not farfetched to suppose that there might be some possible technology which is such that (a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads almost universally to existential disaster. So where is the Great Filter? Behind us, or not behind us?” Nick Bostrom
“The Universe is a dark and foreboding place, suspended between alien deities. Cthulhu, Gnon, Moloch, call them what you will. Somewhere in this darkness is another god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. The other gods sit on their dark thrones and think “Ha ha, a god who doesn’t even control any hell-monsters or command his worshippers to become killing machines. What a weakling! This is going to be so easy!” But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents. There are many gods, but this one is ours.” Scott Alexander
- Existential Risk: An Introduction - Andrew Critch.
- Existential-risk.org. Also: Existential Risk Resources - Bruce Schneier. Both are slightly dated (2015) but the links are still relevant.
- Existential-risks.com - roadmaps on all existential risks and subsets like AI, biotech, nanotech, nukes, natural risks, etc. Also: Existential Risks and different scenarios.
- S-Risks: Why They Are The Worst X-Risks and How to Prevent Them - Max Daniel. S-risks.org - introduction and further essays on reducing risks of future suffering.
- The Fermi Paradox - WaitButWhy. Dissolving the Fermi Paradox - Anders Sandberg, Eric Drexler, Toby Ord.
- Doomsday Argument - Nick Bostrom. Also: The Anthropic Bias
- The Vulnerable World Hypothesis - Nick Bostrom. Robin Hanson’s response.
- The Case for Reducing Extinction Risks - Benjamin Todd.
- Unknown Unknowns - Alexey Turchin on as-yet-unknown Existential Risks.
- The Expected Value of Extinction Risk Reduction is Positive - Jan M. Brauner, Friederike M. Grosse-Holz.
- Long-term Trajectories of Human Civilization - Seth Baum et al. on long-term trajectories of civilization as a field of study, distinguishing status quo trajectories, catastrophe trajectories, technological transformation trajectories, and astronomical trajectories as possibilities.
- The world’s biggest problems and why they’re not what first comes to mind - 80,000 Hours.
- Managing Existential Risks from Emerging Technologies - Nick Beckstead, Toby Ord.
- Existential Risk: Diplomacy & Governance - Sebastian Farquhar, John Halstead, Owen Cotton-Barratt, Stefan Schubert, Haydn Belfield, Andrew Snyder-Beattie.
- Superintelligence - Nick Bostrom focuses on AI risks but makes a more general case for taking Existential Risk seriously.
- The Doomsday Machine - Daniel Ellsberg on the precarious past and present state of our global nuclear weapons situation.
- War on the Rocks - tracks current geopolitical developments.
Coordination Failure & Civilizational Decline
- Part 1: Ethics, Now and Tomorrow & Part 2: Racing Where? Debate with Robin Hanson, Paul Christiano, Peter Eckersley, Christine Peterson, Alyssa Vance, and Mark Miller on the trajectory of civilization.
- Robin Hanson wrote this piece on Dreamtime and races to the bottom, this piece on Civilization, this piece on Civilization vs. Humans, this on Value Drift, and Age of Em. Scott Alexander and Robin had this written discussion on whether those futures are worth living.
- Meditations on Moloch, Growing Children for Bostrom’s Disneyland - Scott Alexander on multipolar traps. Slaying Alexander’s Moloch, and Capturing Gnon And Naive Rationalism are two responses. Moloch’s Toolbox - excerpt on coordination problems.
- My Outlook - Paul Christiano on the probability of civilizational survival, and the relative influence, distribution, and entrenchment of human values.
- The Future of Human Evolution - Nick Bostrom on two possible negative trajectories for civilization. Also: Coordination Problems in Evolution - Martin Sustrik
- The chapters on The Craft And The Community in Rationality: From AI to Zombies. Especially good chapters are Rationalists vs Barbarians, Church vs Task Force, Can Humanism Match Religion’s Output, and Why Our Kind Can’t Cooperate.
- Governing Boring Apocalypses - Matthijs Maas on the need to focus on general strategies to make civilization more robust in addition to working on specific risks.
- We Are Not Saved - Jeremiah’s podcast combining Taleb’s Antifragility with Mormonism.
- The Fall of Civilizations - a historical deep dive into the decline of previous civilizations.
- Solving the Generator Function for Existential Risks - Daniel Schmachtenberger. Here as a blog post.
- The Long Now Foundation’s project on Creating A Manual For Civilization is now being extended and digitized by the Internet Archive.
- 21 Lessons for the 21st Century - Yuval Harari.
- Re.Silience - blog on resilience and systems-thinking.
- GCR Organization Directory - Global Catastrophic Risk Institute lists organizations focused on catastrophic and existential risks.
- Future of Humanity Institute - The Future of Humanity Institute (FHI) is a multidisciplinary research institute at the University of Oxford working on Existential Risk.
- Center for the Study of Existential Risk - CSER is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse.
- Future of Life Institute - The Future of Life Institute is a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence.
- Global Catastrophic Risk Institute - A think tank leading research, education, and professional networking on global catastrophic risk.
- Global Priorities Institute - Our Vision: A world in which global priorities are set by using evidence and reason to determine what will do the most good. Our Mission: To conduct and promote foundational academic research on how most effectively to do good.
- Center for Effective Altruism - The Centre for Effective Altruism helps to grow and maintain the effective altruism movement.
- EA Newsletter - EA Newsletter Archive and Sign-up
- EA Hub - EA Hub for projects, people, and news on EA
- Open Philanthropy Project - Through research and grantmaking, we hope to learn how to make philanthropy go especially far in terms of improving lives.
- 80,000 Hours - We’re here to give you the information you need to find that fulfilling, high-impact career. Our advice is all free, tailored for talented graduates, and based on five years of research alongside academics at Oxford.
- GiveWell - GiveWell is a non-profit dedicated to finding outstanding giving opportunities through in-depth analysis.
- ALLFED - works on planning, preparedness, and research into practical food solutions so that, in the event of a global catastrophe, we can respond quickly, save lives, and reduce the risk to civilization.
- Berkeley Existential Risk Initiative - Our mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, our main strategy is to take on ethical and legal responsibility, as a collaborator, for projects deemed to be important for reducing existential risk.
- Foundational Research Institute - Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research.
- Foresight Institute - Foresight Institute is a leading non-profit research organization focused on technologies of fundamental importance for the human future, focusing on molecular machine nanotechnology, cybersecurity, and artificial intelligence.
- John Garrick Institute for the Risk Sciences - The advancement and application of the risk sciences to save lives, protect the environment and improve system performance.