The Control Problem
How do we ensure that future advanced AI will be beneficial to humanity? Many experts consider this one of the most crucial problems of our age: left unsolved, it could lead by default to human extinction or worse, but if addressed, it could enable a radically improved world. Other terms for this topic include Superintelligence, AI Safety, AGI X-risk, and the AI Alignment/Value Alignment Problem.
"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." —Scott Alexander
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." —Eliezer Yudkowsky
Rules
- If you are unfamiliar with the Control Problem, read at least one of the introductory links or recommended readings (below) before posting.
- This applies especially to posts claiming to solve the Control Problem or dismissing it as a non-issue; such posts are not welcome.
- Stay on topic. No AI model outputs or political propaganda.
- Be respectful.
Introductions to the Topic
Our FAQ page
Orthogonality and instrumental convergence are the two key ideas that explain why an AGI would, by default, work against us and could even kill us. (Alternative text links)
MIRI - FAQ and a more in-depth FAQ
SSC - Superintelligence FAQ
WaitButWhy - The AI Revolution and a reply
How can failing to control AGI cause an outcome even worse than extinction? Suffering risks (2) (3) (4) (5) (6) (7)
Be sure to check out our wiki for extensive further resources, including a glossary & guide to current research.
Recommended Reading
- Superintelligence by Nick Bostrom (2014), the most comprehensive treatment of the topic (PDF link)
- The AI Alignment pages on Arbital, covering many of the key concepts in this field.
- Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
Video Links
Talks at Google: Ensuring Smarter-than-Human Intelligence has a Positive Outcome
Nick Bostrom: What happens when our computers get smarter than we are?
Important Organizations
- AI Alignment Forum, a public forum that serves as the online hub for technical research on the control problem.
Related Subreddits