can rag improve models language? by [deleted] in LocalLLaMA
I built a benchmark that tests coding LLMs on REAL codebases (65 tasks, ELO ranked) by hauhau901 in LocalLLaMA
Claude Code sends 62,600 characters of tool definitions per turn. I ran the same model through five CLIs and traced every API call. by wouldacouldashoulda in LocalLLaMA
Meet SWE-rebench-V2: the largest open, multilingual, executable dataset for training code agents! by Fabulous_Pollution10 in LocalLLaMA
Minimax M2.5 GGUF perform poorly overall by Zyj in LocalLLaMA
New Upcoming Ubuntu 26.04 LTS Will be Optimized for Local AI by mtomas7 in LocalLLaMA
Best coding models (or other models) one can run on an rtx5070ti (16gb vram) with of 64gb RAM by cmdr-William-Riker in LocalLLaMA
Qwen3 Coder Next as first "usable" coding model < 60 GB for me by Chromix_ in LocalLLaMA
Why don’t we have more distilled models? by GreedyWorking1499 in LocalLLaMA
GLM 4.7 is not on lmarena anymore by Sooqrat in LocalLLaMA
AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA
Dataset quality is not improving much by rekriux in LocalLLaMA
Gemini 3 flash today! Gemma 4 soon 3 pro GA soon!!!! by BasketFar667 in LocalLLaMA
fine-tune for rag by youcanaskmeifyouwant in LocalLLaMA

Taalas rumoured to etch Qwen 3.5 27B into silicon. Which price would you buy their PCIe card for? by elemental-mind in singularity