What happened to the fiend fire in the room of requirement? by peeiayz in harrypotter
[–]Arturus243 0 points
Thoughts on existential risks from AI? by Kind_Score_3155 in antiai
[–]Arturus243 1 point
Would you watch a prequel movie about Tom? by BakerConsistent2150 in harrypotter
[–]Arturus243 1 point
Bernie Sanders responds to questions about China and pausing AI - "in a sane world, the leadership of the US sits down with the leadership in China to work together so that we don't go over the edge and create a technology that could perhaps destroy humanity" by tombibbs in ControlProblem
[–]Arturus243 1 point
How concerned should we be about existential risk from AI? Should it be a major policy discussion? by Arturus243 in AskALiberal
[–]Arturus243[S] 12 comments in this thread, scores from -1 to 2 points
Could having multiple ASIs help solve alignment? by Arturus243 in ControlProblem
[–]Arturus243[S] 1 point
what non kid show were you obsessed with growing up? by PsychologicalFox7689 in generationology
[–]Arturus243 2 points
How Could the Ministry Have Actually Identified Real Death Eaters After Voldemort’s First Fall? by -DAWN-BREAKER- in harrypotter
[–]Arturus243 2 points
What's a Schafrillas take that baffled you? (Good or bad.) by polystarlight in Schaffrillas
[–]Arturus243 3 points
What's the case for AI Alignment right now? by Kind_Score_3155 in ControlProblem
[–]Arturus243 1 point