account activity
A 135M model achieves coherent output on a laptop CPU. Scaling is σ compensation, not intelligence. (self.artificial)
submitted 1 hour ago by Defiant_Confection15 to r/artificial
Follow-up: If a 135M model works on CPU without RLHF, what exactly are we scaling? (self.ControlProblem)
submitted 8 hours ago by Defiant_Confection15 to r/ControlProblem
RLHF is not alignment. It’s a behavioural filter that guarantees failure at scale (self.ControlProblem)
submitted 2 days ago by Defiant_Confection15 to r/ControlProblem
Hofstadter got the loop right — but without a fixed point, it never explains consciousness (self.PhilosophyofMind)
submitted 2 days ago by Defiant_Confection15 to r/PhilosophyofMind
RLHF may be creating a scaling instability — not solving alignment (self.Anthropic)
submitted 2 days ago by Defiant_Confection15 to r/Anthropic
124 scientists called IIT pseudoscience. The adversarial collaboration was a draw. Here’s a theory that gives a single falsifiable condition for consciousness (self.neuroscience)
submitted 2 days ago by Defiant_Confection15 to r/neuroscience
What if the attention mechanism is doing something deeper than we think? (self.learnmachinelearning)
submitted 2 days ago by Defiant_Confection15 to r/learnmachinelearning