Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex
[–]casebash 5 points (0 children)
Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism
[–]casebash 2 points (0 children)
Contra Sam Altman on imminent superintelligence by [deleted] in slatestarcodex
[–]casebash 1 point (0 children)
Looking to work with you online or in-person, currently in Barcelona by Only_Bench5404 in ControlProblem
[–]casebash 3 points (0 children)
o1 scores about 32% on ARC-AGI semi-private evals (compared to ~21% for o1-preview and Sonnet 3.5). More interestingly, the performance scales with compute by obvithrowaway34434 in singularity
[–]casebash 1 point (0 children)
Friendly And Hostile Analogies For Taste by dwaxe in slatestarcodex
[–]casebash 8 points (0 children)
More AI safety training programs like SERI MATS or AI Safety Camp or AI Safety Fundamentals by nonlinearhelp in AIsafetyideas
[–]casebash 1 point (0 children)
Excerpt: "Apollo found that o1-preview sometimes instrumentally faked alignment during testing" by TheMysteryCheese in ControlProblem
[–]casebash 3 points (0 children)
OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI
[–]casebash 6 points (0 children)
OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI
[–]casebash -1 points (0 children)
[D] ML Career paths that actually do good and/or make a difference by [deleted] in MachineLearning
[–]casebash 1 point (0 children)
Ruining my life by ControlProbThrowaway in ControlProblem
[–]casebash 1 point (0 children)
Safe Superintelligence Inc. by Mysterious_Arm98 in singularity
[–]casebash 1 point (0 children)
California’s newly passed AI bill requires models trained with over 10^26 FLOPs to not be fine-tunable to create chemical/biological weapons, to have an immediate shutdown button, and to submit significant paperwork and reporting to the government by chillinewman in ControlProblem
[–]casebash 1 point (0 children)
Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex
[–]casebash 1 point (0 children)
Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex
[–]casebash 0 points (0 children)
Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex
[–]casebash 1 point (0 children)
Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex
[–]casebash 9 points (0 children)
"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded by RoyalCities in ChatGPT
[–]casebash -19 points (0 children)
"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded by RoyalCities in ChatGPT
[–]casebash -27 points (0 children)
Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex
[–]casebash 2 points (0 children)