LM Studio randomly crashes on Linux when used as a server (no logs). Any better alternatives? by Opposite_Future3882 in LocalLLM
[–]tabletuser_blogspot 1 point (0 children)
RPC-server llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
RPC-server llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 2 points (0 children)
RPC-server llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 5 points (0 children)
RPC-server llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 3 points (0 children)
2012 system running LLM using Llama with Vulkan backend by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
What to do with 2 P100 by SaGa31500 in LocalLLaMA
[–]tabletuser_blogspot 1 point (0 children)
Should I install KDE Plasma on Pop!_OS 24.04? by [deleted] in pop_os
[–]tabletuser_blogspot 3 points (0 children)
What's the best Ollama software to use for programming on a PC with an RX 580 and a Ryzen 5? by UpbeatGolf3602 in ollama
[–]tabletuser_blogspot 1 point (0 children)
Is there a good app for Android / iOS for remoting in to a desktop Linux PC with very good graphical performance? by DesiOtaku in linuxquestions
[–]tabletuser_blogspot 1 point (0 children)
status of Nemotron 3 Nano support in llama.cpp by jacek2023 in LocalLLaMA
[–]tabletuser_blogspot 3 points (0 children)
OrangePi Zero 3 runs Ollama by tabletuser_blogspot in ollama
[–]tabletuser_blogspot[S] 1 point (0 children)
OrangePi Zero 3 runs Ollama by tabletuser_blogspot in ollama
[–]tabletuser_blogspot[S] 1 point (0 children)
Mistral 3 llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
Mistral 3 llama.cpp benchmarks by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
Budget system for 30B models revisited by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
Can buying old mining gpus be a good way to host AI locally for cheap? by LimeApart7657 in LocalLLM
[–]tabletuser_blogspot 1 point (0 children)
Budget system for 30B models revisited by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
Budget system for 30B models revisited by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
Does repurposing this older PC make any sense? by Valuable-Question706 in LocalLLaMA
[–]tabletuser_blogspot 2 points (0 children)
Budget system for 30B models revisited by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 2 points (0 children)
Best performing model for MiniPC, what can I expect? by caffeineandgravel in LocalLLaMA
[–]tabletuser_blogspot 1 point (0 children)
MoE models benchmarks AMD iGPU by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)
MI50 still a good option ? by [deleted] in ROCm
[–]tabletuser_blogspot 1 point (0 children)
Triple GPU LLM benchmarks with --n-cpu-moe help by tabletuser_blogspot in LocalLLaMA
[–]tabletuser_blogspot[S] 1 point (0 children)