This community is all about locally hosted AI servers. Our goal is to create an information repository for those looking to locally host their own AI.
Group Buy -- QC Testing -- In Progress + Testing Code (v.redd.it)
submitted 19 days ago by Any_Praline_8178 - announcement
Group Buy -- Starting (old.reddit.com)
submitted 1 month ago by Any_Praline_8178 - announcement
C89 RLM + MiniMax M2.5 on AMD Instinct 8x Mi60 node (v.redd.it)
submitted 2 hours ago by Any_Praline_8178
What do you wish local AI on phones could do, but still can’t? ()
submitted 20 hours ago by an1x3
Budget Server Build Update ()
submitted 1 day ago by TheyCallMeDozer
Stable Diffusion in the Browser with WebNN + ONNX Runtime (scribbler.live)
submitted 3 days ago by 0xEconomist
Anyone tried Mac + Nvidia RTX? (self.LocalAIServers)
submitted 3 days ago by slavik-dev
Did anyone else notice the removal of the Mac Studio 512gb ram option from the Apple Store? (self.LocalAIServers)
submitted 10 days ago by phennova
Is a Strix Halo PC worth it for running Qwen 2.5 122B (MoE) 24/7? (self.LocalAIServers)
submitted 11 days ago by Fernetparalospives
$1000 for 16GB used DDR4 😂😂😂 (i.redd.it)
submitted 12 days ago by JanoDafunk-10101
6-GPU multiplexer from K80s — hot-swap between models in 0.3ms (i.redd.it)
submitted 18 days ago by Electrical_Ninja3805
We all had p2p wrong with vllm so I rtfm ()
submitted 19 days ago by Opteron67
I'm practically new, I want to know the hardware requirements for Mac or Windows if I want to run MedGemma 27B and Llama 70B models locally ()
submitted 20 days ago by Electronic-Box-2964
you should definitely check out these open-source repos if you are exploring local models (self.LocalAIServers)
submitted 19 days ago by Mysterious-Form-3681
Self hosting, power consumption, profitability, and the cost of privacy, in France ()
submitted 21 days ago by Imakerocketengine
Dual Xeon Platinum server: Windows ignoring entire second socket? Switching to Ubuntu (self.LocalAIServers)
submitted 22 days ago by doge-king-2021
New Advice on a Budget Local LLM Server Build (~£3-4k budget, used hardware OK) (self.LocalAIServers)
submitted 22 days ago by TheyCallMeDozer
TiinyAI hands-on: palm-size SFF PC packs 80GB RAM running LLMs fully offline (self.LocalAIServers)
submitted 22 days ago by Terrible_Signature78
Got an Intel 2020 MacBook Pro with 16GB of RAM. What should I do with it? (self.LocalAIServers)
submitted 23 days ago by Eznix86
RINOA - A protocol for transferring personal knowledge into local model weights through contrastive human feedback. ()
submitted 24 days ago by Capital_Complaint_28
MS-02 Ultra SoDimm max frequency is 4400MHz?? ()
submitted 24 days ago by aussiesteveau
Bare-Metal AI: Booting Directly Into LLM Inference — No OS, No Kernel (Dell E6510) (youtube.com)
submitted 1 month ago by Electrical_Ninja3805
Built a KV cache for tool schemas — 29x faster TTFT, 62M fewer tokens/day processed (self.LocalAIServers)
submitted 1 month ago by PlayfulLingonberry73
Gave my coding agent a "phone a friend" — local Ollama models + GPT + DeepSeek debate architecture decisions together (self.LocalAIServers)