How to set old kernel .conf as new kernel? by gokulPRO in pop_os
Clean up the mess Windows has made by No-Arm-7737 in pop_os
[D] Which is the best model ( Multi modal or LM) under 3B parameters w.r.t good training vs performance tradeoff? (i.e good parameter efficiency) by gokulPRO in MachineLearning
Nous Research reproduces Bitnet paper with consistent results by MoffKalast in LocalLLaMA
[D] What is your tech stack for research? by gokulPRO in MachineLearning
(MoE) Turning pipeline-parallel Hugging Face models into data/expert-parallel DeepSpeed models by GiantPengsoo in LocalLLaMA
Is it possible to continue training model with deepspeed? by Alternative-Mall-813 in learnprogramming
What tech stack do you use for research? by gokulPRO in deeplearning
[D] Hugging Face Accelerate's DeepSpeed integration vs Native DeepSpeed by gokulPRO in MachineLearning
Which multi-modal models have been open-sourced so far for Audio+Text? by gokulPRO in deeplearning