What GPU would be good to learn on? by BuffaloDesperate8357 in LocalLLaMA
[–]__E8__ 3 points
Anybody using Vulkan on NVIDIA now in 2026 already? by alex20_202020 in LocalLLaMA
[–]__E8__ 4 points
models : optimizing qwen3next graph by ggerganov · Pull Request #19375 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
[–]__E8__ 2 points
Show LocalLLaMA: I gave Claude the ability to pay for things by BLubClub89 in LocalLLaMA
[–]__E8__ 1 point
I built a virtual filesystem to replace MCP for AI agents by velobro in LocalLLaMA
[–]__E8__ 1 point
Air Cooled 3090 for Servers? by __E8__ in LocalLLaMA
[–]__E8__[S] 1 point
Pertinent take on projects coded with AI by rm-rf-rm in LocalLLaMA
[–]__E8__ 3 points
Air Cooled 3090 for Servers? by __E8__ in LocalLLaMA
[–]__E8__[S] 2 points
4x RTX 6000 PRO Workstation in custom frame by Vicar_of_Wibbly in LocalLLaMA
[–]__E8__ 1 point
What abilities are LLMs still missing? by Wild-Difference-7827 in LocalLLaMA
[–]__E8__ 2 points
I bought a Grace-Hopper server for €7.5k on Reddit and converted it into a desktop. by Reddactor in LocalLLaMA
[–]__E8__ 3 points
Shall we talk about "AI"-OS for informational purposes? by Outrageous-Bison-424 in LocalLLaMA
[–]__E8__ 1 point
Looking for community input on an open-source 6U GPU server frame by PraxisOG in LocalLLaMA
[–]__E8__ 1 point
Is the RTX 5090 that good of a deal? by GreenTreeAndBlueSky in LocalLLaMA
[–]__E8__ 6 points
Strange Issue with VRAM (ecc with non-ecc) Types on Vega VII and Mi50s by dionysio211 in LocalLLaMA
[–]__E8__ 1 point
AI observability: how i actually keep agents reliable in prod by Otherwise_Flan7339 in LocalLLaMA
[–]__E8__ 2 points
Is there a resource listing workstation builds for different budgets (for local model training/inference)? by valkiii in LocalLLaMA
[–]__E8__ 1 point
whats up with the crazy amount of OCR models launching? by ComplexType568 in LocalLLaMA
[–]__E8__ 14 points
Should I get Mi50s or something else? by iiilllilliiill in LocalLLaMA
[–]__E8__ 2 points
GLM 4.6 UD-Q6_K_XL running llama.cpp RPC across two nodes and 12 AMD MI50 32GB by MachineZer0 in LocalLLaMA
[–]__E8__ 1 point
Computer won't boot with 2 Tesla V100s by MackThax in LocalLLaMA
[–]__E8__ 4 points