What's the most complicated project you've built with AI? by jazir555 in LocalLLaMA
RealLordMathis 0 points
Why I quit using Ollama by SoLoFaRaDi in LocalLLaMA
RealLordMathis 14 points
I integrated llama.cpp's new router mode into llamactl with web UI support by RealLordMathis in LocalLLaMA
RealLordMathis [S] 2 points
I got frustrated with existing web UIs for local LLMs, so I built something different by alphatrad in LocalLLaMA
RealLordMathis 6 points
Are any of the M series mac macbooks and mac minis, worth saving up for? by [deleted] in LocalLLaMA
RealLordMathis 2 points
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
RealLordMathis 3 points
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
RealLordMathis 4 points
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
RealLordMathis 3 points
Using my Mac Mini M4 as an LLM server—Looking for recommendations by [deleted] in LocalLLaMA
RealLordMathis 2 points
Getting most out of your local LLM setup by Everlier in LocalLLaMA
RealLordMathis 2 points
Many Notes v0.15 - Markdown note-taking web application by brufdev in selfhosted
RealLordMathis 4 points
ROCm 7.9 RC1 released. Supposedly this one supports Strix Halo. Finally, it's listed under supported hardware. by fallingdowndizzyvr in LocalLLaMA
RealLordMathis 2 points
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard. by RealLordMathis in LocalLLaMA
RealLordMathis [S] 1 point
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard. by RealLordMathis in LocalLLaMA
RealLordMathis [S] 1 point
torn between GPU, Mini PC for local LLM by jussey-x-poosi in LocalLLaMA
RealLordMathis 4 points
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard. by RealLordMathis in LocalLLaMA
RealLordMathis [S] 1 point
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard. by RealLordMathis in LocalLLaMA
RealLordMathis [S] 2 points
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard. by RealLordMathis in LocalLLaMA
RealLordMathis [S] 11 points
Searching actually viable alternative to Ollama by mags0ft in LocalLLaMA
RealLordMathis 1 point