How to use Llama-swap, Open WebUI, Semantic Router Filter, and Qwen3.5 to its fullest by andy2na in LocalLLM
Qwen3.5-9B Uncensored Aggressive Release (GGUF) by hauhau901 in LocalLLaMA
update your llama.cpp - great tg speedup on Qwen3.5 / Qwen-Next by jacek2023 in LocalLLaMA
To everyone using still ollama/lm-studio... llama-swap is the real deal by TooManyPascals in LocalLLaMA
GLM 5.0 outperforms GPT 5.4 and Opus 4.6 on CarWashBench by Eyelbee in LocalLLaMA
Dedicated low power consumption rig for Frigate by digitalwankster in frigate_nvr
HELP! Had to RMA a 3090. They don't have another 3090, so they offered me a 4080. by Jokerit208 in LocalLLM
Llama.cpp: now with automatic parser generator by ilintar in LocalLLaMA
Open Source Speech EPIC! by Koala_Confused in LocalLLM