INTELLIGENCE

With greater intelligence, what new questions could we explore and what answers could we find?

What if AI could revolutionize the way we tackle global challenges? Advances in AI could transform how we solve problems and make decisions in almost every domain, from healthcare to climate change.

Learn all about intelligence under "Intro," understand potential risks under "Risks," discover the most hopeful scenarios under "Hope," and find out how to get involved under "Action."

Intelligence 


Artificial (General) Intelligence


Near-term 

  • Malicious Use of AI Report - Miles Brundage et al. A report on risks arising from near-term and longer-term progress in AI, and potential policy and technical approaches to address those risks.
  • Slaughterbots - Stop Autonomous Weapons. Fictional short video on the dangers of lethal autonomous weapons.
  • Information Security Concerns for AI & The Long-term Future - Jeff Ladish. Introduces information security as a crucial problem that is currently undervalued by the AI safety community.
  • Teachable Moment Dual Use - Lawfare Podcast. An interview with two scientists who created an AI-powered molecule generator capable of designing thousands of new biochemical weapons within hours.


Intelligence Takeoff 


Alignment

  • AGI Ruin: A List of Lethalities - Eliezer Yudkowsky. Forty-three reasons why Yudkowsky is pessimistic about our world's ability to solve AGI safety. Related essays on the dangers can be found in Rationality: From AI to Zombies - Eliezer Yudkowsky, especially My Naturalist Awakening, That Tiny Note of Discord, Sympathetic Minds, Sorting Pebbles into Correct Heaps, No Universally Compelling Arguments, The Design Space of Minds-in-General, Detached Lever Fallacy, Ethical Injunctions, Magical Categories, and Fake Utility Functions.
  • The Basic AI Drives - Steve Omohundro. On fundamental drives that may be inherent in any artificially intelligent system and their dangers.
  • Orthogonality Thesis - Nick Bostrom. On why an increase in intelligence need not correlate with alignment with human values; see the toy sketch after this list.
  • AI Alignment & Security - Paul Christiano. On how the relationship between security and alignment concerns is underappreciated. 
  • Eliciting Latent Knowledge - Paul Christiano, Ajeya Cotra, Mark Xu. On how to train models to elicit knowledge of off-screen events that is latent to the models.
  • Artificial Intelligence, Values and Alignment - Iason Gabriel. On philosophical considerations relating to AI alignment, proposing to select principles that receive widespread reflective endorsement.
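
The orthogonality thesis is easy to make concrete. Below is a minimal, purely illustrative Python sketch (the grid world, function names, and utility functions are hypothetical, not taken from Bostrom's paper): a single brute-force planner is paired with two unrelated utility functions, showing that how competently an agent searches is independent of which goal it serves.

```python
# Toy illustration of the orthogonality thesis: the planning machinery
# (capability) is separate from the objective it optimizes. Everything
# here is a hypothetical construction, not code from Bostrom's paper.
from itertools import product

GRID = 4  # 4x4 grid world; states are (x, y) cells, clamped to the grid


def plan(start, utility, horizon=6):
    """Brute-force search over move sequences; return the best score.

    The search procedure never changes -- only the utility function does.
    """
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    best_score = float("-inf")
    for path in product(moves, repeat=horizon):
        x, y = start
        score = 0.0
        for dx, dy in path:
            x = min(max(x + dx, 0), GRID - 1)  # clamp to grid bounds
            y = min(max(y + dy, 0), GRID - 1)
            score += utility((x, y))
        best_score = max(best_score, score)
    return best_score


# Two arbitrary, unrelated goals served equally well by the same planner.
stay_home = lambda s: 1.0 if s == (0, 0) else 0.0    # reward staying at "home"
far_corner = lambda s: 1.0 if s == (3, 3) else 0.0   # reward the far corner

for name, u in [("stay_home", stay_home), ("far_corner", far_corner)]:
    print(f"{name}: best achievable score = {plan((1, 1), u)}")
```

Swapping the utility function changes what the planner pursues but leaves its optimization power untouched, which is the core claim of the thesis.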


Coordination


Fiction

  • Daemon - Daniel Suarez. On the disastrous near-term implications of AI.
  • Autonomous - Annalee Newitz. On dangers of biotechnology, AI and the intersection of both.
  • After On - Rob Reid. On near-term risks of AI-infused social media.
  • That Alien Message - Eliezer Yudkowsky. A parable in which humanity plays the part of a boxed AI, illustrating how a superintelligence could outthink its creators.

Strategies


Value Alignment


Decentralized AI


Fiction

  • Understand - Ted Chiang. On the promises of dramatically enhanced understanding.
  • GPT-3 Fiction - Gwern. Fiction written by GPT-3.
  • AI Aftermath Scenarios - Max Tegmark. Surveys twelve potential long-term scenarios arising from AI, classified according to different utopian or dystopian ideals.
  • The Three-Body Problem - Cixin Liu. Sci-fi classic on first contact with an alien civilization.


Staying Up to Date

  • AI Alignment Newsletter - A weekly publication summarizing recent content relevant to AI alignment, with over 2,600 subscribers. See also the AI Alignment Database spreadsheet.
  • AI Alignment Forum - A single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. 
  • AIsafety.com - An AI safety reading group that meets regularly online.
  • Beneficial AI Conference - FLI YouTube channel. Recordings of the Beneficial AI conference, featuring leading researchers and practitioners in the field.
  • AI Safety & Policy Job Board - 80,000 Hours.
  • Any of the newsletters of the organizations below.
  • AI Safety Support: Lots of Links. A collection of resources for everything AI Safety, including fellowships and training programmes, news and updates, research agendas, AI Safety organizations and initiatives, major publications, and support.


Organizations on the Cause

  • Guide to Working in AI Policy - Miles Brundage. On potential roles, requirements, and organizations in AI policy.
  • AI Safety Camp - Connects you with an experienced research mentor to collaborate on their open problem during intensive co-working sprints – helping you test your fit for a potential career in AI Safety research.
  • MIRI (Machine Intelligence Research Institute) - MIRI's artificial intelligence research is focused on developing the mathematical theory of trustworthy reasoning for advanced autonomous AI systems. 
  • OpenAI - An AI research and deployment company whose stated mission is to ensure that artificial general intelligence benefits all of humanity.
  • DeepMind - See especially the DeepMind Ethics & Society unit, and The Mind of Demis Hassabis for an overview of the thinking of DeepMind's co-founder.
  • Center for Human-Compatible AI - CHAI's goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
  • Anthropic - A research company that’s working to build reliable, interpretable, and steerable AI systems.
  • Future of Humanity Institute - A multidisciplinary research institute at the University of Oxford working on existential risk.
  • Future of Life Institute - A volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly risk from advanced artificial intelligence.
  • GovAI - Building a global research community dedicated to helping humanity navigate the transition to a world with advanced AI.
  • Leverhulme Centre for the Future of Artificial Intelligence - A global community to ensure that AI benefits all of humanity.
  • AI Objectives Institute - The objective of this organization is to help humanity pick better objectives for AI systems, markets, and other large-scale optimization processes.
  • Aligned AI - A benefit corporation dedicated to solving the alignment problem – for all types of algorithms and AIs, from simple recommender systems to hypothetical superintelligences.
  • Ought - A product-driven research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems.
  • OpenMined - Helps each member of society answer their most important questions by empowering them to learn from data owned and governed by others.
  • Foresight Institute - Supports the beneficial development of high-impact technologies to make great futures more likely.  
  • AI Startups in SF - Overview of AI startups based in San Francisco.
