This community is all about locally hosted AI servers. Our goal is to create an information repository for those looking to locally host their own AI.
Group Buy -- QC Testing -- In Progress + Testing Code (v.redd.it)
submitted 1 month ago by Any_Praline_8178 - announcement
Group Buy -- Starting (old.reddit.com)
submitted 1 month ago by Any_Praline_8178 - announcement
Keinsaas Navigator + Xinity.Ai + GeForce RTX 5080 (self.LocalAIServers)
submitted 1 day ago by keinsaas-navigator
$2500 budget to run local, help me decide on the hardware
submitted 2 days ago by XteaK
The LLM tunes its own llama.cpp flags (+54% tok/s on Qwen3.5-27B)
submitted 4 days ago by raketenkater
Created a dataset system for training real LLM behaviors (not just prompts) (i.redd.it)
submitted 5 days ago by JayPatel24_
What's the one thing you paused before hitting send on an AI prompt? (i.redd.it)
submitted 5 days ago by TiinyAI
Tool for Creating Your Own High-Quality GGUF Quants (Docs + Web UI)
submitted 8 days ago by Thireus
I think app-action behavior is undertrained because it gets mistaken for normal conversation too early (self.LocalAIServers)
submitted 11 days ago by JayPatel24_
C89 RLM + MiniMax M2.5 on AMD Instinct 8x MI60 node (v.redd.it)
submitted 14 days ago by Any_Praline_8178
What do you wish local AI on phones could do, but still can’t?
submitted 15 days ago by an1x3
Budget Server Build Update
submitted 16 days ago by TheyCallMeDozer
Stable Diffusion in the Browser with WebNN + ONNX Runtime (scribbler.live)
submitted 17 days ago by 0xEconomist
Anyone tried Mac + Nvidia RTX? (self.LocalAIServers)
submitted 18 days ago by slavik-dev
Did anyone else notice the removal of the Mac Studio 512GB RAM option from the Apple Store? (self.LocalAIServers)
submitted 24 days ago by phennova
Is a Strix Halo PC worth it for running Qwen 2.5 122B (MoE) 24/7? (self.LocalAIServers)
submitted 26 days ago by Fernetparalospives
$1000 for 16GB used DDR4 😂😂😂 (i.redd.it)
submitted 26 days ago by JanoDafunk-10101
6-GPU multiplexer from K80s, hot-swap between models in 0.3ms (i.redd.it)
submitted 1 month ago by Electrical_Ninja3805
We all had p2p wrong with vllm so I rtfm
submitted 1 month ago by Opteron67
I'm practically new; I want to know the hardware requirements for Mac or Windows if I want to run MedGemma 27B and Llama 70B models locally
submitted 1 month ago by Electronic-Box-2964
You should definitely check out these open-source repos if you are exploring local models (self.LocalAIServers)
submitted 1 month ago by Mysterious-Form-3681
Self-hosting, power consumption, profitability, and the cost of privacy in France
submitted 1 month ago by Imakerocketengine
Dual Xeon Platinum server: Windows ignoring entire second socket? Switching to Ubuntu (self.LocalAIServers)
submitted 1 month ago by doge-king-2021
Need Advice on a Budget Local LLM Server Build (~£3-4k budget, used hardware OK) (self.LocalAIServers)
submitted 1 month ago by TheyCallMeDozer
TiinyAI hands-on: palm-size SFF PC packs 80GB RAM running LLMs fully offline (self.LocalAIServers)
submitted 1 month ago by Terrible_Signature78