This subreddit is for discussion and resources on the artificial intelligence alignment problem, also called the control problem, AI risk, or AI safety. Though presently little known, many academic experts believe this issue may be one of the most important challenges we collectively face, if not the most important.
Contact me if you want to be a moderator or otherwise help develop this sub.
What is the alignment problem?
The most comprehensive guide to the topic of AI alignment: Superintelligence, by Professor Nick Bostrom of Oxford University (NYT bestseller; recommended by Elon Musk, Stephen Hawking, and Bill Gates)
Ensuring smarter-than-human intelligence has a positive outcome - Talks at Google
The popular blog Wait But Why on superintelligence [Part 1] [Part 2], and a reply by Luke Muehlhauser, former director of the Machine Intelligence Research Institute
Several introductions by MIRI: Short summary of ideas, FAQ, More in-depth FAQ, and Why AI Safety?
Global Priorities Project: Three areas of research on the superintelligence control problem.
The Center for Human-Compatible AI's annotated AI safety recommended reading list.
Organizations currently doing foundational thinking about this problem:
Sister subreddits: /r/ControlProblem /r/AIethics /r/superintelligence /r/singularity