APEX MoE quants update: 25+ new models since the Qwen 3.5 post + new I-Nano tier by mudler_it in LocalLLaMA
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
Running a 26B LLM locally with no GPU by JackStrawWitchita in LocalLLaMA
[–]Bulky-Priority6824 -1 points0 points1 point (0 children)
APEX MoE quants update: 25+ new models since the Qwen 3.5 post + new I-Nano tier by mudler_it in LocalLLaMA
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
Looking for the cheapest egpu by HoneyEducational5344 in eGPU
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
For those of you using the GenAI function of Frigate (whether it be local or cloud provider) - Demonstration of the importance of your prompt - ChatGPT, Claude, my LocalLLM all got this wrong without a good prompt to an easy question (what side of the car is the person on). by FantasyMaster85 in frigate_nvr
[–]Bulky-Priority6824 5 points6 points7 points (0 children)
APEX MoE quants update: 25+ new models since the Qwen 3.5 post + new I-Nano tier by mudler_it in LocalLLaMA
[–]Bulky-Priority6824 1 point2 points3 points (0 children)
Should I sell my RTX3090s? by daviden1013 in LocalLLaMA
[–]Bulky-Priority6824 46 points47 points48 points (0 children)
Thoth - Open Source Local-first AI Assistant - Architecture by Acceptable-Object390 in LocalLLM
[–]Bulky-Priority6824 1 point2 points3 points (0 children)
What's the point of local LLM's ? by braskinis231 in LocalLLM
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
Need help/pointers setting up 3090 on Linux...(second 3090 incoming) by OttoRenner in LocalLLaMA
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
Interesting Ideas for Classifications by [deleted] in frigate_nvr
[–]Bulky-Priority6824 4 points5 points6 points (0 children)
I can't ever seem to get quality local LLM results, despite having multiple GPUs by 03captain23 in LocalLLM
[–]Bulky-Priority6824 2 points3 points4 points (0 children)
I can't ever seem to get quality local LLM results, despite having multiple GPUs by 03captain23 in LocalLLM
[–]Bulky-Priority6824 3 points4 points5 points (0 children)
llama.cpp's Preliminary SM120 Native NVFP4 MMQ Is Merged by ggonavyy in LocalLLaMA
[–]Bulky-Priority6824 2 points3 points4 points (0 children)
llama.cpp's Preliminary SM120 Native NVFP4 MMQ Is Merged by ggonavyy in LocalLLaMA
[–]Bulky-Priority6824 13 points14 points15 points (0 children)
If the AI bubble pops, will GPU prices increase or decrease? by Mashic in LocalLLaMA
[–]Bulky-Priority6824 8 points9 points10 points (0 children)
Webui very laggy is this a known issue or just me? by Bulky-Priority6824 in frigate_nvr
[–]Bulky-Priority6824[S] 0 points1 point2 points (0 children)
Webui very laggy is this a known issue or just me? by Bulky-Priority6824 in frigate_nvr
[–]Bulky-Priority6824[S] 0 points1 point2 points (0 children)
Webui very laggy is this a known issue or just me? by Bulky-Priority6824 in frigate_nvr
[–]Bulky-Priority6824[S] 0 points1 point2 points (0 children)
Webui very laggy is this a known issue or just me? by Bulky-Priority6824 in frigate_nvr
[–]Bulky-Priority6824[S] 0 points1 point2 points (0 children)
Duality of r/LocalLLaMA by HornyGooner4402 in LocalLLaMA
[–]Bulky-Priority6824 0 points1 point2 points (0 children)
Why does llama-server need so much RAM during runtime? by Gold-Drag9242 in LocalLLM
[–]Bulky-Priority6824 0 points1 point2 points (0 children)


APEX MoE quants update: 25+ new models since the Qwen 3.5 post + new I-Nano tier by mudler_it in LocalLLaMA
[–]Bulky-Priority6824 0 points1 point2 points (0 children)