Value Alignment

Research Priorities for Robust and Beneficial Artificial Intelligence - Stuart Russell, Daniel Dewey, Max Tegmark. Paper outlining research priorities for developing AI systems that are robust and beneficial.

Directions and Desiderata for AI Alignment - Paul Christiano. Post outlining priorities for AI alignment; his AI Alignment blog is generally a good resource.

The Landscape of AI Safety and Beneficence Research - Richard Mallah. Overview of the AI safety landscape.

Value Alignment Landscape - Lucas Perry. Overview of the subfield of value alignment.

Coherent Extrapolated Volition - Eliezer Yudkowsky. Classic proposal for aligning superintelligent systems with an idealized account of human values.

Objections to Coherent Extrapolated Volition - LessWrong. Criticisms of CEV.