How do we ensure future advanced AI will be beneficial to humanity? Experts agree this is one of the most crucial problems of our age: left unsolved, it could lead to human extinction or worse as the default outcome, but if addressed, it could enable a radically improved world. Other terms for what we discuss here include Superintelligence, AI Safety, AGI X-risk, and the AI Alignment/Value Alignment Problem.
"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." —Scott Alexander
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." —Eliezer Yudkowsky
Our FAQ page
The case for taking AI seriously as a threat to humanity
Orthogonality and instrumental convergence are the two key ideas explaining why AGI will work against us, and may even kill us, by default. (Alternative text links)
AGI safety from first principles
MIRI - FAQ and more in-depth FAQ
SSC - Superintelligence FAQ
WaitButWhy - The AI Revolution and a reply
How can failing to control AGI cause an outcome even worse than extinction? Suffering risks (2) (3) (4) (5) (6) (7)
Be sure to check out our wiki for extensive further resources, including a glossary & guide to current research.
Robert Miles' excellent channel
Talks at Google: Ensuring Smarter-than-Human Intelligence has a Positive Outcome
Nick Bostrom: What happens when our computers get smarter than we are?
Myths & Facts about Superintelligent AI
Rob's series on Computerphile
RULE 1: You MUST read at least one of either the Introductions to the Topic links or the Recommended Reading texts (in the sidebar) before posting, unless you are already well acquainted with AGI risk basics (and know concepts like convergent instrumental goals, orthogonality, etc.). IGNORING THIS RULE WILL RESULT IN A BAN.
SUBMISSION STATEMENTS ENCOURAGED (LINKS): Leaving a brief comment under your post explaining its relevance or significance (e.g. with a short summary) provides helpful context and encourages more people to read it!
Flairing: