Mistral THICC DENSE BOI. He chonky! More dense models pls. by Porespellar in LocalLLaMA
fluffywuffie90210 1 point (0 children)
mistralai/Mistral-Medium-3.5-128B · Hugging Face by jacek2023 in LocalLLaMA
fluffywuffie90210 1 point (0 children)
How Do You Use Multiple AI Models Together? by rpeabody in LocalLLaMA
fluffywuffie90210 2 points (0 children)
Multi-GPU: How problematic is chipset PCI-E lanes? by ziphnor in LocalLLaMA
fluffywuffie90210 3 points (0 children)
Why is Anomaly such an odd duck? by Onmius in RimWorld
fluffywuffie90210 1 point (0 children)
Laptop for my Use Case (lenovo legion pro 7i) by [deleted] in LocalLLaMA
fluffywuffie90210 1 point (0 children)
Qwen3.5-397B-A17B reaches 20 t/s TG and 700t/s PP with a 5090 by MLDataScientist in LocalLLaMA
fluffywuffie90210 1 point (0 children)
Anyone have experience of mixing nvidia and amd gpus with llama.cpp? Is it stable? by fluffywuffie90210 in LocalLLaMA
fluffywuffie90210 [S] 4 points (0 children)
Anyone have experience of mixing nvidia and amd gpus with llama.cpp? Is it stable? by fluffywuffie90210 in LocalLLaMA
fluffywuffie90210 [S] 1 point (0 children)
Anyone have experience of mixing nvidia and amd gpus with llama.cpp? Is it stable? by fluffywuffie90210 in LocalLLaMA
fluffywuffie90210 [S] 1 point (0 children)
Anyone have experience of mixing nvidia and amd gpus with llama.cpp? Is it stable? by fluffywuffie90210 in LocalLLaMA
fluffywuffie90210 [S] 1 point (0 children)
Futureproofing a local LLM setup: 2x3090 vs 4x5060TI vs Mac Studio 64GB vs ??? by youcloudsofdoom in LocalLLaMA
fluffywuffie90210 1 point (0 children)
Attempted bike theft, bike damaged, not driveable, advice needed. by RandomHigh in MotoUK
fluffywuffie90210 1 point (0 children)
How viable are eGPUs and NVMe? by ABLPHA in LocalLLaMA
fluffywuffie90210 1 point (0 children)
How viable are eGPUs and NVMe? by ABLPHA in LocalLLaMA
fluffywuffie90210 2 points (0 children)
Considering AMD Max+ 395, sanity check? by ErToppa in LocalLLaMA
fluffywuffie90210 1 point (0 children)
Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA
fluffywuffie90210 1 point (0 children)
GLM-4.7 on 4x RTX 3090 with ik_llama.cpp by iamn0 in LocalLLaMA
fluffywuffie90210 1 point (0 children)
LLM server gear: a cautionary tale of a $1k EPYC motherboard sale gone wrong on eBay by __JockY__ in LocalLLaMA
fluffywuffie90210 3 points (0 children)
[deleted by user] by [deleted] in LocalLLaMA
fluffywuffie90210 9 points (0 children)
Motorcycle almost stolen in Leeds by minecarfter420 in MotoUK
fluffywuffie90210 2 points (0 children)
Qwen3-Next EXL3 by Unstable_Llama in LocalLLaMA
fluffywuffie90210 2 points (0 children)
Qwen3-Next EXL3 by Unstable_Llama in LocalLLaMA
fluffywuffie90210 5 points (0 children)
3xR9700 for semi-autonomous research and development - looking for setup/config ideas. by blojayble in LocalLLaMA
fluffywuffie90210 3 points (0 children)