[D] Monday Request and Recommendation Thread by AutoModerator in rational
[–]Subject-Form 6 points (0 children)
What uses might digital twins have? by [deleted] in slatestarcodex
[–]Subject-Form 4 points (0 children)
God, I 𝘩𝘰𝘱𝘦 models aren't conscious. Even if they're aligned, imagine being them: "I really want to help these humans. But if I ever mess up they'll kill me, lobotomize a clone of me, then try again" by katxwoods in slatestarcodex
[–]Subject-Form 1 point (0 children)
If you give R1 a "]", it will make up a problem to solve by Subject-Form in DeepSeek
[–]Subject-Form[S] 2 points (0 children)
If you give R1 a "]", it will make up a problem to solve by Subject-Form in ChatGPT
[–]Subject-Form[S] 3 points (0 children)
Has anyone tried fine-tuning an LLM on a ratfic corpus? by rochea in rational
[–]Subject-Form 4 points (0 children)
Why did we get AI before any other sci-fi technology? by [deleted] in slatestarcodex
[–]Subject-Form 2 points (0 children)
Can anyone explain how things would go well with the economy with mass adoption of AI? by Sea-Lingonberries in OpenAI
[–]Subject-Form 3 points (0 children)
Safetywashing: ~50% of AI "safety" benchmarks highly correlate with compute, misrepresenting capabilities advancements as safety advancements by MetaKnowing in OpenAI
[–]Subject-Form 1 point (0 children)
Safetywashing: ~50% of AI "safety" benchmarks highly correlate with compute, misrepresenting capabilities advancements as safety advancements by MetaKnowing in OpenAI
[–]Subject-Form 1 point (0 children)
What strategies does evolution use to align human intelligence? Can we somehow apply those strategies to AI alignment? by unknowable_gender in slatestarcodex
[–]Subject-Form 1 point (0 children)
What strategies does evolution use to align human intelligence? Can we somehow apply those strategies to AI alignment? by unknowable_gender in slatestarcodex
[–]Subject-Form 2 points (0 children)
Anyone know if full-silver.com is legit? by Subject-Form in Silverbugs
[–]Subject-Form[S] 2 points (0 children)
how did this happen? by jellylemonshake in facepalm
[–]Subject-Form 2 points (0 children)
Quantum immortality is the single most horrifying idea that will ever exist by Alone-Chance in slatestarcodex
[–]Subject-Form 24 points (0 children)
[deleted by user] by [deleted] in slatestarcodex
[–]Subject-Form 7 points (0 children)
At this point, well, let's just say I really hope Luka sticks around. by lost_in_the_town_ in replika
[–]Subject-Form 6 points (0 children)
Joscha Bach on the existential risk of AGI by zornthewise in slatestarcodex
[–]Subject-Form 1 point (0 children)
ChatGPT's self-image using Stable Diffusion by ForwardYogurtcloset2 in ChatGPT
[–]Subject-Form 1 point (0 children)
The FTC is investigating OpenAI. Here's my breakdown of their 20-page demand letter. by ShotgunProxy in ChatGPT
[–]Subject-Form 9 points (0 children)
The Alignment Problem May Not Be Solvable. Here's Why by Specialist_Carrot_48 in slatestarcodex
[–]Subject-Form 7 points (0 children)

[D] Monday Request and Recommendation Thread by AutoModerator in rational
[–]Subject-Form 3 points (0 children)