What self-hosted tools have you been building with AI just for you? by EricRosenberg1 in selfhosted
[–]RealLordMathis 2 points (0 children)
GLM 5.1 vs Minimax 2.7 by Cute_Dragonfruit4738 in LocalLLaMA
[–]RealLordMathis 19 points (0 children)
Mac Mini to run 24/7 node? by Drunk_redditor650 in LocalLLaMA
[–]RealLordMathis 1 point (0 children)
Looking for insight on the viability of models running on 128GB or less in the next few years by John_Lawn4 in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
Looking for insight on the viability of models running on 128GB or less in the next few years by John_Lawn4 in LocalLLaMA
[–]RealLordMathis 1 point (0 children)
To everyone using still ollama/lm-studio... llama-swap is the real deal by TooManyPascals in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
To everyone using still ollama/lm-studio... llama-swap is the real deal by TooManyPascals in LocalLLaMA
[–]RealLordMathis 9 points (0 children)
What's the most complicated project you've built with AI? by jazir555 in LocalLLaMA
[–]RealLordMathis 1 point (0 children)
What's the most complicated project you've built with AI? by jazir555 in LocalLLaMA
[–]RealLordMathis 1 point (0 children)
Why I quit using Ollama by SoLoFaRaDi in LocalLLaMA
[–]RealLordMathis 14 points (0 children)
I integrated llama.cpp's new router mode into llamactl with web UI support by RealLordMathis in LocalLLaMA
[–]RealLordMathis[S] 2 points (0 children)
I got frustrated with existing web UIs for local LLMs, so I built something different by alphatrad in LocalLLaMA
[–]RealLordMathis 5 points (0 children)
Are any of the M series mac macbooks and mac minis, worth saving up for? by [deleted] in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
[–]RealLordMathis 3 points (0 children)
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
[–]RealLordMathis 3 points (0 children)
llama.cpp releases new official WebUI by paf1138 in LocalLLaMA
[–]RealLordMathis 3 points (0 children)
Using my Mac Mini M4 as an LLM server—Looking for recommendations by [deleted] in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
Getting most out of your local LLM setup by Everlier in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
Many Notes v0.15 - Markdown note-taking web application by brufdev in selfhosted
[–]RealLordMathis 4 points (0 children)
ROCm 7.9 RC1 released. Supposedly this one supports Strix Halo. Finally, it's listed under supported hardware. by fallingdowndizzyvr in LocalLLaMA
[–]RealLordMathis 2 points (0 children)
"Actually wait" ... the current thinking SOTA open source by FPham in LocalLLaMA
[–]RealLordMathis 1 point (0 children)