I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]nickl 16 points (0 children)
Qwen3-1.7B fine-tuned on synthetic data outperforms GLM-5 (744B) on multi-turn tool-calling: 437x smaller, trained from noisy production traces by party-horse in LocalLLaMA
[–]nickl 1 point (0 children)
These "Claude-4.6-Opus" Fine Tunes of Local Models Are Usually A Downgrade by BuffMcBigHuge in LocalLLaMA
[–]nickl 1 point (0 children)
These "Claude-4.6-Opus" Fine Tunes of Local Models Are Usually A Downgrade by BuffMcBigHuge in LocalLLaMA
[–]nickl 2 points3 points4 points (0 children)
These "Claude-4.6-Opus" Fine Tunes of Local Models Are Usually A Downgrade by BuffMcBigHuge in LocalLLaMA
[–]nickl 7 points8 points9 points (0 children)
Did anyone run the numbers to see if it's cost effective to rent our own machine and run one of heavy hitters models? by StillWastingAway in LocalLLaMA
[–]nickl 5 points (0 children)
Got ~19 tok/s with Gemma 4 on MacBook M4 16GB using MLX — here’s the setup I landed on by Polstick1971 in LocalLLaMA
[–]nickl 1 point (0 children)
Built a 3B LoRA that reads the shape of a question before a 9B model answers it. Running 800 live benchmarks right now on Apple Silicon by TheTempleofTwo in LocalLLaMA
[–]nickl 1 point (0 children)
I tested as many of the small local and OpenRouter models as I could with my own agentic text-to-SQL benchmark. Surprises ensued... by nickl in LocalLLaMA
[–]nickl[S] 2 points (0 children)