“A great design appears at first insane; \
But chance will soon seem quaint and blind, \
And such an exemplary thinking brain \
Will soon by thinkers be designed”
- Goethe, ‘Faust’
- Slow Takeoff vs Fast Takeoff - Paul Christiano.
- The AI Foom Debate - Robin Hanson and Eliezer Yudkowsky on hard vs. soft takeoff. Also available as video: Yudkowsky vs Hanson: Singularity Debate. As background: Intelligence Explosion Microeconomics - Eliezer Yudkowsky.
- Artificial General Intelligence is Here and It’s Useless - George Hosu. Against AGI risk.
- Malicious Use of AI Report - Miles Brundage et al.
- Artificial Intelligence as a Positive and Negative Factor in Global Risk - Eliezer Yudkowsky.
- Rationality: From AI to Zombies - Eliezer Yudkowsky. Especially My Naturalist Awakening, That Tiny Note of Discord, Sympathetic Minds, Sorting Pebbles into Heaps, No Universally Compelling Argument, The Design of Minds in General, Detached Lever Fallacy, Ethical Injunctions, Magical Theories, Fake Utility Functions.
- Orthogonality Thesis - Nick Bostrom on why intelligence and goals are orthogonal.
- Basic AI drives - Steve Omohundro argues that sufficiently advanced AI systems will develop instrumentally useful subgoals that may be harmful, e.g. acquiring more computing power.
- AI Now 2019 Report - Kate Crawford et al.
- Reading Guide for the Global Politics of Artificial Intelligence - a comprehensive reading list by Allan Dafoe.
- Smart Policies for Artificial Intelligence - Miles Brundage, Joanna Bryson.
- A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy - Seth Baum.
- Policy Desiderata in the Development of Machine Superintelligence - Nick Bostrom, Allan Dafoe, Carrick Flynn.
- Deciphering China’s AI Dream - Jeffrey Ding.
- Avoiding the Precipice: Race Avoidance in the Development of AI - Olga Afanasjeva, Jan Feyereisl, Marek Havrda.
- Racing to the Precipice: A Model of Artificial Intelligence Development - Stuart Armstrong, Nick Bostrom, Carl Shulman.
- Beyond MAD?: The Race for Artificial General Intelligence - Roman Yampolskiy.
- AI and the Future of Defense - Stephan De Spiegeleire, Matthijs Maas, Tim Sweijs.
- Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research - Hin-Yan Liu.
- The Unilateralist’s Curse: The Case for a Principle of Conformity - Anders Sandberg, Nick Bostrom, Tom Douglas.
- Guide to working in AI policy - Miles Brundage.
- Strategic Implications of Openness in AI Development - Nick Bostrom.
Organizations
- A fairly comprehensive list of AI safety organizations, maintained by the Future of Life Institute.
- AISafety.com - a virtual reading group on AI safety that meets every Wednesday.
- Partnership on AI - Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
- Machine Intelligence Research Institute - MIRI’s artificial intelligence research is focused on developing the mathematical theory of trustworthy reasoning for advanced autonomous AI systems.
- OpenAI - OpenAI is a non-profit artificial intelligence research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole.
- DeepMind - AI company acquired by Google, with a strong safety focus; it has since established the DeepMind Ethics & Society unit.
- Center for Human-Compatible AI - CHAI’s goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
- Leverhulme Centre for the Future of Intelligence - a global community working to ensure that AI benefits all of humanity.
- Future of Humanity Institute - the Future of Humanity Institute (FHI) is a multidisciplinary research institute at the University of Oxford working on existential risk.
- Future of Life Institute - a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence.
- Foresight Institute - a leading non-profit research organization focused on technologies of fundamental importance for the human future, particularly molecular machine nanotechnology, cybersecurity, and artificial intelligence.