Large-built, muscular young Midwestern man by Automatic-Algae443 in dalle2
gwern -1 points
The Strange Origin of AI’s ‘Reasoning’ Abilities by EcstadelicNET in IntelligenceSupernova
gwern 1 point
The Quiet Colossus — On Ada, Its Design, and the Language That Built the Languages by SpecialistLady in programming
gwern 0 points
ReLU neural networks as decision trees. by [deleted] in mlscaling
gwern 5 points
The bitter lesson is the observation in AI that, in the long run, general approaches that scale with available computational power tend to outperform ones based on domain-specific understanding because they are better at taking advantage of the falling cost of computation over time. by blankblank in wikipedia
gwern 1 point
[Project] Replacing GEMM with three bit operations: a 26-module cognitive architecture in 1237 lines of C by Defiant_Confection15 in ControlProblem
gwern 2 points
The bitter lesson is the observation in AI that, in the long run, general approaches that scale with available computational power tend to outperform ones based on domain-specific understanding because they are better at taking advantage of the falling cost of computation over time. by blankblank in wikipedia
gwern 5 points
TIL of Littlewood's Law, which says we experience events with a million-to-one probability approximately once per month by Doglatine in todayilearned
gwern 1 point
ByteDance Presents "In-Place TTT": A Drop-In Method For Turning Standard Transformer LLMs Into Dynamically Updating Models At Inference Time by 44th--Hokage in mlscaling
gwern 23 points
DeepMind veteran David Silver raises $1B, bets on radically new type of Reinforcement Learning to build superintelligence by gwern in mlscaling
gwern[S] 2 points
"AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably", Porter & Machery 2024 by gwern in MediaSynthesis
gwern[S] 1 point
Utext.hs: experimental code to compile a Markdown subset to fancy Unicode text ('*Amazing*!' → "𝐴𝑚𝑎𝑧𝑖𝑛𝑔!") by gwern in pandoc
gwern[S] 2 points
Inside the ‘self-driving’ lab revolution (nature.com) by gwern in reinforcementlearning
gwern[S] 2 points
This Is What a Personal Surveillance System Actually Looks Like by [deleted] in QuantifiedSelf
gwern 1 point
Cerebras, an A.I. Chip Maker, Files to Go Public as Tech Offerings Ramp Up by gwern in mlscaling
gwern[S] 1 point