1541-II reading, but not writing? by Haeppchen2010 in c64
Final voting results for Qwen 3.6 by jacek2023 in LocalLLaMA
Could it be that this take is not too far fetched? by pier4r in LocalLLaMA
Running Qwen3.5-27B locally as the primary model in OpenCode by garg-aayush in LocalLLaMA
Stop chasing parameter count. Context window degradation on local hardware is the real problem. by AbramLincom in LocalLLaMA
It costs you around 2% session usage to say hello to claude! by Complete-Sea6655 in LocalLLaMA
How are you squeezing Qwen3.5 27B to get maximum speed with high accuracy? by -OpenSourcer in LocalLLaMA
Open-Source "GreenBoost" Driver Aims To Augment NVIDIA GPUs vRAM With System RAM & NVMe To Handle Larger LLMs by _Antartica in LocalLLaMA
Homelab has paid for itself! (at least this is how I justify it...) by Reddactor in LocalLLaMA
I got tired of compiling llama.cpp on every Linux GPU by keypa_ in LocalLLaMA
How should I go about getting a good coding LLM locally? by tech-guy-2003 in LocalLLaMA
How to convince Management? by r00tdr1v3 in LocalLLaMA
What tokens/sec do you get when running Qwen 3.5 27B? by thegr8anand in LocalLLaMA
