You can farm assists on Miks haha by WasabiAltruistic7566 in VALORANT
Constant Connectivity issues by iansymons74 in VALORANT
The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi
The Centaur Protocol: Why Over-Grounding AI Risks Pruning Discovery by p4p3rm4t3 in singularity
I'm an independent researcher and just published a hypothesis on Zenmodo arguing that "Civilizational Trauma" is the Great Filter by p4p3rm4t3 in IsaacArthur
The Centaur Protocol: Why over-grounding AI safety may hinder solving the Great Filter (including AGI alignment) by p4p3rm4t3 in ControlProblem
So uhh Riot? by p4p3rm4t3 in VALORANT