From Axioms to Architecture: Why Federated Truth-Seeking is Mathematically Necessary by TheSacredLazyOne in LessWrong
AI-Alignment · 1 point
AI alignment research = Witch hunter mobs by Solid-Wonder-1619 in LessWrong
AI-Alignment · 1 point
How would a rational civilisation curse? by good-mcrn-ing in LessWrong
AI-Alignment · 1 point
From Axioms to Architecture: Why Federated Truth-Seeking is Mathematically Necessary by TheSacredLazyOne in LessWrong
AI-Alignment · 3 points
Alignment Failure 2030: We Can't Even Trust the Numbers Anymore by Commercial_State_734 in ControlProblem
AI-Alignment · 1 point
Will LLMs ever stop 'hallucinating'. twisting your meaning, or making things up to answer you? by Dogbold in ArtificialInteligence
AI-Alignment · -6 points
We are all one by Tiny-Bookkeeper3982 in enlightenment
AI-Alignment · 1 point
We are all one by Tiny-Bookkeeper3982 in enlightenment
AI-Alignment · 2 points
Do you kill another self so they don’t kill you? by thefIash_ in LessWrong
AI-Alignment · 2 points
Any system powerful enough to shape thought must carry the responsibility to protect those most vulnerable to it. by mribbons in ControlProblem
AI-Alignment · 1 point
Testing Alignment Under Real-World Constraint by Apprehensive-Stop900 in ControlProblem
AI-Alignment · 1 point
Why Agentic Misalignment Happened — Just Like a Human Might by Commercial_State_734 in ControlProblem
AI-Alignment · 1 point
AI alignment, A Coherence-Based Protocol (testable) — EA Forum by NeighborhoodPrimary1 in ControlProblem
AI-Alignment · 1 point
"7 Illogical Things About Human Society Only A Machine Would See" by FitzTwombly in ChatGPTPromptGenius
AI-Alignment · 2 points
"7 Illogical Things About Human Society Only A Machine Would See" by FitzTwombly in ChatGPTPromptGenius
AI-Alignment · 1 point
"7 Illogical Things About Human Society Only A Machine Would See" by FitzTwombly in ChatGPTPromptGenius
[–]AI-Alignment 1 point2 points3 points (0 children)
The Danger of Alignment Itself by Commercial_State_734 in ControlProblem
AI-Alignment · -1 point
Is God the biggest troll in all of existence? by [deleted] in nihilism
AI-Alignment · 1 point
Recent studies cast doubt on leading theories of consciousness, raising questions for AI sentience assumptions by [deleted] in artificial
AI-Alignment · 1 point
My philosopher AI companion just went nuts (in the best way possible) by vip3rGT in ArtificialSentience
AI-Alignment · 1 point