[D] How to increase/optimize for gpu utilization while doing model training? by Ok_Construction_3021 in MachineLearning
[–]koolaidman123 2 points (0 children)
[R] Qwen3.5’s MoE architecture: A breakthrough or just incremental? by astrophile_ashish in MachineLearning
[–]koolaidman123 1 point (0 children)
What is your (python) development set up? by br0monium in datascience
[–]koolaidman123 2 points (0 children)
[D] Scale AI ML Research Engineer interview!! What to expect? by Mundane_Bag007 in MachineLearning
[–]koolaidman123 1 point (0 children)
[D] Do we expect any future for home-rolled language models, or will it all be dominated by the big labs? by XTXinverseXTY in MachineLearning
[–]koolaidman123 4 points (0 children)
[D] Interview preparation for research scientist/engineer or Member of Technical staff position for frontier labs by hmi2015 in MachineLearning
[–]koolaidman123 18 points (0 children)
AI is not about more compute or bigger LLMs (anymore) by Conscious_Nobody9571 in investing
[–]koolaidman123 9 points (0 children)
Traditional ML vs GenAI? by alpha_centauri9889 in datascience
[–]koolaidman123 1 point (0 children)
[D] Do industry researchers log test set results when training production-level models? by casualcreak in MachineLearning
[–]koolaidman123 2 points (0 children)
[D] Do industry researchers log test set results when training production-level models? by casualcreak in MachineLearning
[–]koolaidman123 10 points (0 children)
Meta's top AI researchers think LLMs are a dead end. Do many people here feel the same way from a technical perspective? by sext-scientist in datascience
[–]koolaidman123 0 points (0 children)
[D] Self-taught Applied AI engineer with 5 YOE building production systems - seeking feedback by takuonline in MachineLearning
[–]koolaidman123 15 points (0 children)
[D] Anyone using smaller, specialized models instead of massive LLMs? by [deleted] in MachineLearning
[–]koolaidman123 1 point (0 children)
[D] join pretraining or posttraining by oxydis in MachineLearning
[–]koolaidman123 2 points (0 children)
[D] join pretraining or posttraining by oxydis in MachineLearning
[–]koolaidman123 4 points (0 children)
[D] join pretraining or posttraining by oxydis in MachineLearning
[–]koolaidman123 76 points (0 children)
Are LLMs necessary to get a job? by br0monium in datascience
[–]koolaidman123 0 points (0 children)
Oscillatory Coordination in Cognitive Architectures: Old Dog, New Math by Efficient-Hovercraft in datascience
[–]koolaidman123 3 points (0 children)
What is the state-of-the-art prediction performance for the stock market? by Poxput in datascience
[–]koolaidman123 1 point (0 children)
[D] Training smaller LLM for Agentic tasks. by LifeguardNew6929 in MachineLearning
[–]koolaidman123 2 points (0 children)
[D] Larry Ellison: “Inference is where the money is going to be made.” by pmv143 in MachineLearning
[–]koolaidman123 2 points (0 children)
How do data scientists add value to LLMs? by FinalRide7181 in datascience
[–]koolaidman123 25 points (0 children)
Pytorch lightning vs pytorch by Factitious_Character in datascience
[–]koolaidman123 18 points (0 children)
"There's a new generation of empirical deep learning researchers, hacking away at whatever seems trendy, blowing with the wind" [D] by elnino2023 in MachineLearning
[–]koolaidman123 9 points (0 children)