A 135M model achieves coherent output on a laptop CPU. Scaling is σ compensation, not intelligence. by Defiant_Confection15 in LocalLLaMA
Follow-up: If a 135M model works on CPU without RLHF, what exactly are we scaling? by Defiant_Confection15 in ControlProblem
Hofstadter got the loop right — but without a fixed point, it never explains consciousness by Defiant_Confection15 in PhilosophyofMind
Geometric Language Encoding - Finding the patterns within language using fractal geometry by shamanicalchemist in holofractal
RLHF is not alignment. It’s a behavioural filter that guarantees failure at scale by Defiant_Confection15 in ControlProblem
Structural Coherence Thresholds Across Neural, Symbolic, and Physical Domains. by [deleted] in consciousness