What is AI safety?
AI safety is the study of how to ensure that increasingly capable artificial intelligence systems remain aligned with human values and beneficial to society. Our focus spans technical research, ethics, and policy, ranging from preventing catastrophic failures in advanced systems to understanding more gradual social and economic impacts.
We discuss topics including, but not limited to: the threat of superintelligence, the economic impacts of AGI, timelines to AGI-level capabilities, the AI alignment problem, the moral status of machine consciousness, gradual disempowerment, LLM-induced psychosis, global AGI race dynamics, international diplomacy, theoretical limits of intelligence, and simulation theory.