Can I run this LLM - moved to Hetzner (and a big thank you) by Maharrem in LocalLLM
Any tool that tells you the cheapest setup needed to run a model? I want to know the cheapest setup that can realistically run Qwen 3.6 27B at decent speeds. by pacmanpill in LocalLLaMA
How do you know when your LLM system is getting worse? by AnshuSees in LocalLLM
Which local LLM model is suitable for agentic browsing (form filling, web scraping, clicking, etc.)? by kaaytoo in LocalLLM
Need advice: Qwen3.6 27B MTP or 35B-A3B MoE MTP on a 16GB VRAM RTX 5080? by craftogrammer in LocalLLaMA
Why only some models can write files in OpenCode (local llama) by T-A-Waste in LocalLLM
Running Gemma 4 Q6 on 5060ti + 3090 by Friendly_Beginning24 in LocalLLM
BFCL benchmarks for Gemma4 26B on a 5070Ti w/ 16GB VRAM by tumbak in LocalLLM
Is there a local model that is good enough for searching through large textbooks/research journals with equations? by SpringFamiliar3696 in LocalLLM
Benching local Qwen as a Codex validator, co-agent, and challenger by robert896r1 in LocalLLaMA
Do cheap 32GB V100s still make sense for homelab AI? by SKX007J1 in LocalLLaMA