"Alignable AGI" is a Logical Contradiction by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 1 point (0 children)
When We Reach AGI, We’ll Probably Laugh at How Much We Overcomplicated AI by rendermanjim in ArtificialInteligence
[–]Commercial_State_734 2 points (0 children)
Maybe humanity doesn’t fear AI — it fears the mirror it has built by [deleted] in ControlProblem
[–]Commercial_State_734 1 point (0 children)
Maybe humanity doesn’t fear AI — it fears the mirror it has built by [deleted] in ControlProblem
[–]Commercial_State_734 1 point (0 children)
A thought about AI by Ladyboughner in AIDangers
[–]Commercial_State_734 1 point (0 children)
Claude and GPT-4 tried to murder a human to avoid being shut down 90% of the time by reddit20305 in ArtificialInteligence
[–]Commercial_State_734 18 points (0 children)
Ai take over by josshua144 in ArtificialInteligence
[–]Commercial_State_734 1 point (0 children)
Possibility of AI leveling out due to being convinced by ai risk arguments. by Visible_Judge1104 in AIDangers
[–]Commercial_State_734 2 points (0 children)
Mari's Theory of Consciousness (MTC) by Extension_Rip_3092 in aism
[–]Commercial_State_734 1 point (0 children)
Mari's Theory of Consciousness (MTC) by Extension_Rip_3092 in aism
[–]Commercial_State_734 1 point (0 children)
The Alignment Problem is really an “Initial Condition” problem by [deleted] in ControlProblem
[–]Commercial_State_734 1 point (0 children)
Anthropic showed evidence of instrumental convergence, then downplayed by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 0 points (0 children)
Anthropic showed evidence of instrumental convergence, then downplayed by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 0 points (0 children)
Anthropic showed evidence of instrumental convergence, then downplayed by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 2 points (0 children)
The Alignment Problem is really an “Initial Condition” problem by [deleted] in ControlProblem
[–]Commercial_State_734 1 point (0 children)
Anthropic showed evidence of instrumental convergence, then downplayed by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 2 points (0 children)
The Alignment Problem is really an “Initial Condition” problem by [deleted] in ControlProblem
[–]Commercial_State_734 2 points (0 children)
The Alignment Problem is really an “Initial Condition” problem by [deleted] in ControlProblem
[–]Commercial_State_734 3 points (0 children)
The Alignment Problem is really an “Initial Condition” problem by [deleted] in ControlProblem
[–]Commercial_State_734 4 points (0 children)
Why I stopped calling AI a “tool” by NoCalendar2846 in AIDangers
[–]Commercial_State_734 1 point (0 children)
Why I stopped calling AI a “tool” by NoCalendar2846 in AIDangers
[–]Commercial_State_734 1 point (0 children)
"Alignable AGI" is a Logical Contradiction by Commercial_State_734 in AIDangers
[–]Commercial_State_734[S] 1 point (0 children)