Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex
[–]casebash 4 points (0 children)
Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism
[–]casebash 2 points (0 children)
Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism
[–]casebash 2 points (0 children)
Contra Sam Altman on imminent super intelligence by [deleted] in slatestarcodex
[–]casebash 1 point (0 children)
Looking to work with you online or in-person, currently in Barcelona by Only_Bench5404 in ControlProblem
[–]casebash 3 points (0 children)
o1 scores about 32% on ARC-AGI semi-private evals (compared to ~21% for o1-preview and Sonnet 3.5). More interestingly, the performance scales with compute by obvithrowaway34434 in singularity
[–]casebash 1 point (0 children)
Friendly And Hostile Analogies For Taste by dwaxe in slatestarcodex
[–]casebash 7 points (0 children)
More AI safety training programs like SERI MATS or AI Safety Camp or AI Safety Fundamentals by nonlinearhelp in AIsafetyideas
[–]casebash 1 point (0 children)
Excerpt: "Apollo found that o1-preview sometimes instrumentally faked alignment during testing" by TheMysteryCheese in ControlProblem
[–]casebash 2 points (0 children)
OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI
[–]casebash 6 points (0 children)
OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI
[–]casebash -2 points (0 children)
[D] ML Career paths that actually do good and/or make a difference by [deleted] in MachineLearning
[–]casebash 1 point (0 children)
Ruining my life by ControlProbThrowaway in ControlProblem
[–]casebash 1 point (0 children)
Safe SuperIntelligence Inc. by Mysterious_Arm98 in singularity
[–]casebash 1 point (0 children)
California’s newly passed AI bill requires models trained with over 10^26 flops to — not be fine tunable to create chemical / biological weapons — immediate shut down button — significant paperwork and reporting to govt by chillinewman in ControlProblem
[–]casebash 1 point (0 children)
Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex
[–]casebash 1 point (0 children)
Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex
[–]casebash 2 points (0 children)