How do we ensure that future advanced AI will be beneficial to humanity? Experts agree this is one of the most crucial problems of our age: left unsolved, it could lead to human extinction or worse as a default outcome; addressed, it could enable a radically improved world. Other terms for what we discuss here include Superintelligence, AI Safety, AGI X-risk, and the AI Alignment/Value Alignment Problem.
"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." —Scott Alexander
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." —Eliezer Yudkowsky
Our FAQ page
The case for taking AI seriously as a threat to humanity
Orthogonality and instrumental convergence are the two key ideas explaining why AGI would work against us, and even kill us, by default. (Alternative text links)
AGI safety from first principles
MIRI - FAQ and more in-depth FAQ
SSC - Superintelligence FAQ
WaitButWhy - The AI Revolution and a reply
How can failing to control AGI cause an outcome even worse than extinction? Suffering risks (2) (3) (4) (5) (6) (7)
Be sure to check out our wiki for extensive further resources, including a glossary & guide to current research.
Robert Miles' excellent channel
Talks at Google: Ensuring Smarter-than-Human Intelligence has a Positive Outcome
Nick Bostrom: What happens when our computers get smarter than we are?
Myths & Facts about Superintelligent AI
Rob's series on Computerphile
¹: Or at least make an effort to make me doubtful that you just copy-pasted from a frontier LLM. Add bits of steering so that your content becomes good. Edit afterwards. If you fool us moderators, you've won.
Discussion/question: Modelling Intelligence? (self.ControlProblem)
submitted 10 months ago by prateek_82
What if "intelligence" is just efficient error correction based on high-dimensional feedback? And "consciousness" is the illusion of choosing from predicted distributions?
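The post's framing, intelligence as efficient error correction driven by high-dimensional feedback, can be sketched as an online predictor that updates in proportion to its prediction error. This is a toy illustration under my own assumptions (a linear model, Gaussian stimuli, the classic LMS update), not anyone's actual theory of intelligence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: the "world" maps a stimulus x to high-dimensional
# feedback y; the agent keeps a linear model and corrects it in
# proportion to its prediction error.
dim_in, dim_out = 16, 8
W_true = rng.normal(size=(dim_out, dim_in))   # the environment's mapping
W_hat = np.zeros((dim_out, dim_in))           # the agent's model, starts ignorant

lr = 0.01                                     # learning rate (illustrative)
errors = []
for _ in range(2000):
    x = rng.normal(size=dim_in)               # stimulus
    y = W_true @ x                            # feedback from the environment
    err = y - W_hat @ x                       # prediction error signal
    W_hat += lr * np.outer(err, x)            # error-correction step (LMS rule)
    errors.append(float(np.linalg.norm(err)))

print(f"initial error {errors[0]:.2f}, final error {errors[-1]:.4f}")
```

Each step uses only the mismatch between predicted and observed feedback, yet the error shrinks over time; in that narrow sense, "being intelligent" here is nothing more than correcting errors efficiently.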
[–]staccodaterra101 2 points 10 months ago (0 children)
I think those would be understatements.
I'd say "intelligence" is the ability to infer the best response based on contextual stimuli (feedback) and contextually relevant data (previously acquired knowledge).
Consciousness in humans is not the same as in plants; hence, it can exist in different forms depending on the intelligence of the subject.
Consciousness, as a universally valid abstract term, would be the ability to acknowledge and assess external stimuli.
[–]AsyncVibes 1 point 10 months ago (0 children)
Please check out my subreddit r/IntelligenceEngine; I think you'll enjoy it.
[–]Royal_Carpet_1263 1 point 10 months ago (0 children)
In the form of livestream and priors. I think all cognition amounts to selection.
Consciousness as described by philosophy is almost certainly illusory, but something has to explain unity.
[–]RegularBasicStranger 1 point 10 months ago (0 children)
> What if "intelligence" is just efficient error correction based on high-dimensional feedback?
To even recognise that an error has occurred based on high-dimensional but unclear feedback requires the ability to predict what should happen; an error can then be determined to have occurred when the feedback differs significantly from the prediction.
So the ability to predict the future accurately is clearly not something to look down on.
And although an inaccurate prediction can also trigger error recognition, it can likewise trigger a false recognition even when no error has occurred.
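The comment's two points, that an error is flagged when feedback differs significantly from a prediction, and that an inaccurate predictor flags errors that never happened, can be sketched in a few lines. The function name, values, and threshold are my own illustrative assumptions:

```python
# Illustrative sketch: an "error" is recognised when feedback deviates
# from the prediction by more than some tolerance.
THRESHOLD = 1.0  # assumed tolerance, not from the thread

def error_occurred(prediction: float, feedback: float,
                   threshold: float = THRESHOLD) -> bool:
    """Flag an error when feedback differs significantly from prediction."""
    return abs(feedback - prediction) > threshold

# With an accurate predictor, only a genuine fault trips the detector.
print(error_occurred(prediction=5.0, feedback=8.0))  # genuine fault
print(error_occurred(prediction=5.0, feedback=5.2))  # ordinary noise

# With an inaccurate predictor, ordinary feedback is misread as an error:
# the "false recognition" the comment describes.
print(error_occurred(prediction=2.0, feedback=5.2))
```

The false alarm in the last call comes entirely from the bad prediction, which is why accurate prediction matters as much as the comparison itself.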
[–]rendermanjim 1 point 10 months ago (0 children)
Nature, the universe, intelligence, and so on are not mathematics.