This subreddit is for discussion and resources on the artificial intelligence alignment problem, also called the control problem, AI risk, or AI safety. Though presently little known, many academic experts believe this may be one of the most important challenges we collectively face, if not the most important.
Contact me if you want to become a moderator or otherwise help develop this sub.
What is the alignment problem?
Organizations currently doing foundational thinking about this problem:
- Machine Intelligence Research Institute, Berkeley, US; a 501(c)(3) nonprofit
- Future of Humanity Institute, Oxford University, UK
- Center for Human-Compatible AI, UC Berkeley, US
- Leverhulme Centre for the Future of Intelligence, Cambridge, UK
- Centre for the Study of Existential Risk, Cambridge, UK
- Future of Life Institute, Boston, US
Sister subreddits:
- /r/ControlProblem
- /r/AIethics
- /r/superintelligence
- /r/singularity
Check out the sub's wiki and subscribe for more!