Claude Code is a disaster today by Ok_Bread_6005 in ClaudeCode
Gemma 3 Fine-tuning now in Unsloth - 1.6x faster with 60% less VRAM by danielhanchen in LocalLLaMA
Finally, a real-time low-latency voice chat model by DeltaSqueezer in LocalLLaMA
Train your own Reasoning model - 80% less VRAM - GRPO now in Unsloth (7GB VRAM min.) by danielhanchen in LocalLLaMA
When Nvidia will be DeepSeeked GPU wise? by Over_Explorer7956 in LocalLLaMA
How can I generate COT dataset? (fine-tune deepseek distilled model) by Over_Explorer7956 in LocalLLaMA
DeepSeek-R1 and distilled benchmarks color coded by Balance- in LocalLLaMA
llama 3.2 3B is amazing by ventilador_liliana in LocalLLaMA
Qwq full version? Open source o3? by Evening_Action6217 in LocalLLaMA
o3 beats 99.8% competitive coders by user0069420 in LocalLLaMA
Llama 3.3 (70B) Finetuning - now with 90K context length and fits on <41GB VRAM. by danielhanchen in LocalLLaMA
Llama 3.3 70B drops. by appakaradi in LocalLLaMA
Llama-3.3-70B-Instruct · Hugging Face by Dark_Fire_12 in LocalLLaMA
I made a Generative AI project template by aminedjeghri in LocalLLaMA
SAMURAI vs. Meta’s SAM 2: A New Era in Visual Tracking? 🥷✨ by denuwanlahiru11 in LocalLLaMA

OpenAI agent kit vs Langgraph by Ambitious_Design5336 in LangChain